Merge tag 'housekeeping-20240424' of https://github.com/philmd/qemu into staging

Removal of deprecated code

- Remove the Nios II target and hardware
- Remove pvrdma device and rdmacm-mux helper
- Remove GlusterFS RDMA protocol handling
- Update Sriram Yagnaraman mail address

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmYpE0YACgkQ4+MsLN6t
# wN5PIA//egomANjRHAUAf9tdjljgT/JR49ejM7iInyxspR/xaiq0TlP2kP6aDNps
# y1HAWBwfj5lGxeMgQ1mSKJGka3v2AIPWb7RbNT+9AaiWHv+sx5OrEytozUsFHLo8
# gSgRQocq0NY2a9dPbtkDqfbmq/rkCC7wgZzwroHsyOdiqYsWDKPJFleBDMjGmEaf
# colhiDmhUPgvE3NNpwfEVNh/2SzxUxY8k5FHal6qij5z56ZqBglgnziDZEvGVCZ1
# uF4Hca/kh7TV2MVsdStPbGWZYDhJ/Np/2FnRoThD1Hc4qq8d/SH997m2F94tSOud
# YeH54Vp5lmCeYgba5y8VP0ZPx/b9XnTtLvKggNdoqB+T2LBWPRt8kehqoaxvammF
# ALzbY/t2vUxL6nIVbosOaTyqVOXvynk3/Js5S0jbnlu+vP2WvvFEzfYKIs2DIA8w
# z56o/rG4KfyxF0aDB+CvLNwtJS8THqeivPqmYoKTdN9FPpN2RyBNLITrKo389ygF
# 3oWy3+xsKGIPdNFY0a4l25xntqWNhND89ejzyL9M6G1cQ9RdEmTIUGTrinPQQmfP
# oHIJMBeTdj7EqPL4LB3BR/htw9U5PobeMNYKFsRkS39PjGDqba5wbIdk3w5/Rcxa
# s/PKdspDKWPwZ5jhcLD0qxAGJFnqM2UFjPo+U8qyI3RXKXFAn0E=
# =c8Aj
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 24 Apr 2024 07:12:22 AM PDT
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]

* tag 'housekeeping-20240424' of https://github.com/philmd/qemu:
  block/gluster: Remove deprecated RDMA protocol handling
  hw/rdma: Remove deprecated pvrdma device and rdmacm-mux helper
  hw/timer: Remove the ALTERA_TIMER model
  target/nios2: Remove the deprecated Nios II target
  MAINTAINERS: Update Sriram Yagnaraman mail address

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson 2024-04-24 11:49:57 -07:00
commit 85b597413d
135 changed files with 40 additions and 17218 deletions


@@ -164,7 +164,7 @@ build-system-centos:
CONFIGURE_ARGS: --disable-nettle --enable-gcrypt --enable-vfio-user-server
--enable-modules --enable-trace-backends=dtrace --enable-docs
TARGETS: ppc64-softmmu or1k-softmmu s390x-softmmu
-x86_64-softmmu rx-softmmu sh4-softmmu nios2-softmmu
+x86_64-softmmu rx-softmmu sh4-softmmu
MAKE_CHECK_ARGS: check-build
# Previous QEMU release. Used for cross-version migration tests.
@@ -254,7 +254,7 @@ avocado-system-centos:
IMAGE: centos8
MAKE_CHECK_ARGS: check-avocado
AVOCADO_TAGS: arch:ppc64 arch:or1k arch:s390x arch:x86_64 arch:rx
-arch:sh4 arch:nios2
+arch:sh4
build-system-opensuse:
extends:


@@ -72,7 +72,7 @@
- ../configure --enable-werror --disable-docs $QEMU_CONFIGURE_OPTS
--disable-system --target-list-exclude="aarch64_be-linux-user
alpha-linux-user cris-linux-user m68k-linux-user microblazeel-linux-user
-nios2-linux-user or1k-linux-user ppc-linux-user sparc-linux-user
+or1k-linux-user ppc-linux-user sparc-linux-user
xtensa-linux-user $CROSS_SKIP_TARGETS"
- make -j$(expr $(nproc) + 1) all check-build $MAKE_CHECK_ARGS


@@ -167,7 +167,7 @@ cross-win64-system:
IMAGE: fedora-win64-cross
EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
-m68k-softmmu microblazeel-softmmu nios2-softmmu
+m68k-softmmu microblazeel-softmmu
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:


@@ -100,6 +100,7 @@ Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
+Sriram Yagnaraman <sriram.yagnaraman@ericsson.com> <sriram.yagnaraman@est.tech>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Stefan Weil <sw@weilnetz.de> Stefan Weil <stefan@weilnetz.de>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>


@@ -35,9 +35,6 @@ config VHOST_KERNEL
config VIRTFS
bool
-config PVRDMA
-bool
config MULTIPROCESS_ALLOWED
bool
imply MULTIPROCESS


@@ -291,19 +291,6 @@ F: disas/*mips.c
F: docs/system/cpu-models-mips.rst.inc
F: tests/tcg/mips/
-NiosII TCG CPUs
-R: Chris Wulff <crwulff@gmail.com>
-R: Marek Vasut <marex@denx.de>
-S: Orphan
-F: target/nios2/
-F: hw/nios2/
-F: hw/intc/nios2_vic.c
-F: disas/nios2.c
-F: include/hw/intc/nios2_vic.h
-F: configs/devices/nios2-softmmu/default.mak
-F: tests/docker/dockerfiles/debian-nios2-cross.d/build-toolchain.sh
-F: tests/tcg/nios2/
OpenRISC TCG CPUs
M: Stafford Horne <shorne@gmail.com>
S: Odd Fixes
@@ -2478,7 +2465,7 @@ F: tests/qtest/libqos/e1000e.*
igb
M: Akihiko Odaki <akihiko.odaki@daynix.com>
-R: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
+R: Sriram Yagnaraman <sriram.yagnaraman@ericsson.com>
S: Maintained
F: docs/system/devices/igb.rst
F: hw/net/igb*
@@ -4057,16 +4044,6 @@ F: block/replication.c
F: tests/unit/test-replication.c
F: docs/block-replication.txt
-PVRDMA
-M: Yuval Shaia <yuval.shaia.ml@gmail.com>
-M: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
-S: Odd Fixes
-F: hw/rdma/*
-F: hw/rdma/vmw/*
-F: docs/pvrdma.txt
-F: contrib/rdmacm-mux/*
-F: qapi/rdma.json
Semihosting
M: Alex Bennée <alex.bennee@linaro.org>
S: Maintained


@@ -371,9 +371,6 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
} else if (!strcmp(uri->scheme, "gluster+unix")) {
gsconf->type = SOCKET_ADDRESS_TYPE_UNIX;
is_unix = true;
-} else if (!strcmp(uri->scheme, "gluster+rdma")) {
-gsconf->type = SOCKET_ADDRESS_TYPE_INET;
-warn_report("rdma feature is not supported, falling back to tcp");
} else {
ret = -EINVAL;
goto out;
@@ -1638,44 +1635,8 @@ static BlockDriver bdrv_gluster_unix = {
.strong_runtime_opts = gluster_strong_open_opts,
};
-/* rdma is deprecated (actually never supported for volfile fetch).
- * Let's maintain it for the protocol compatibility, to make sure things
- * won't break immediately. For now, gluster+rdma will fall back to gluster+tcp
- * protocol with a warning.
- * TODO: remove gluster+rdma interface support
- */
-static BlockDriver bdrv_gluster_rdma = {
-.format_name = "gluster",
-.protocol_name = "gluster+rdma",
-.instance_size = sizeof(BDRVGlusterState),
-.bdrv_file_open = qemu_gluster_open,
-.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
-.bdrv_reopen_commit = qemu_gluster_reopen_commit,
-.bdrv_reopen_abort = qemu_gluster_reopen_abort,
-.bdrv_close = qemu_gluster_close,
-.bdrv_co_create = qemu_gluster_co_create,
-.bdrv_co_create_opts = qemu_gluster_co_create_opts,
-.bdrv_co_getlength = qemu_gluster_co_getlength,
-.bdrv_co_get_allocated_file_size = qemu_gluster_co_get_allocated_file_size,
-.bdrv_co_truncate = qemu_gluster_co_truncate,
-.bdrv_co_readv = qemu_gluster_co_readv,
-.bdrv_co_writev = qemu_gluster_co_writev,
-.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
-#ifdef CONFIG_GLUSTERFS_DISCARD
-.bdrv_co_pdiscard = qemu_gluster_co_pdiscard,
-#endif
-#ifdef CONFIG_GLUSTERFS_ZEROFILL
-.bdrv_co_pwrite_zeroes = qemu_gluster_co_pwrite_zeroes,
-#endif
-.bdrv_co_block_status = qemu_gluster_co_block_status,
-.bdrv_refresh_limits = qemu_gluster_refresh_limits,
-.create_opts = &qemu_gluster_create_opts,
-.strong_runtime_opts = gluster_strong_open_opts,
-};
static void bdrv_gluster_init(void)
{
-bdrv_register(&bdrv_gluster_rdma);
bdrv_register(&bdrv_gluster_unix);
bdrv_register(&bdrv_gluster_tcp);
bdrv_register(&bdrv_gluster);


@@ -1,6 +0,0 @@
# Default configuration for nios2-softmmu
# Boards:
#
CONFIG_NIOS2_10M50=y
CONFIG_NIOS2_GENERIC_NOMMU=y


@@ -1 +0,0 @@
TARGET_ARCH=nios2


@@ -1,2 +0,0 @@
TARGET_ARCH=nios2
TARGET_NEED_FDT=y

configure (vendored)

@@ -1169,7 +1169,6 @@ fi
: ${cross_prefix_mips64="mips64-linux-gnuabi64-"}
: ${cross_prefix_mipsel="mipsel-linux-gnu-"}
: ${cross_prefix_mips="mips-linux-gnu-"}
-: ${cross_prefix_nios2="nios2-linux-gnu-"}
: ${cross_prefix_ppc="powerpc-linux-gnu-"}
: ${cross_prefix_ppc64="powerpc64-linux-gnu-"}
: ${cross_prefix_ppc64le="$cross_prefix_ppc64"}
@@ -1258,7 +1257,6 @@ probe_target_compiler() {
mips64) container_hosts=x86_64 ;;
mipsel) container_hosts=x86_64 ;;
mips) container_hosts=x86_64 ;;
-nios2) container_hosts=x86_64 ;;
ppc) container_hosts=x86_64 ;;
ppc64|ppc64le) container_hosts=x86_64 ;;
riscv64) container_hosts=x86_64 ;;


@@ -1,831 +0,0 @@
/*
* QEMU paravirtual RDMA - rdmacm-mux implementation
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include <sys/poll.h>
#include <sys/ioctl.h>
#include <pthread.h>
#include <syslog.h>
#include <infiniband/verbs.h>
#include <infiniband/umad.h>
#include <infiniband/umad_types.h>
#include <infiniband/umad_sa.h>
#include <infiniband/umad_cm.h>
#include "rdmacm-mux.h"
#define SCALE_US 1000
#define COMMID_TTL 2 /* How many SCALE_US a context of MAD session is saved */
#define SLEEP_SECS 5 /* This is used both in poll() and thread */
#define SERVER_LISTEN_BACKLOG 10
#define MAX_CLIENTS 4096
#define MAD_RMPP_VERSION 0
#define MAD_METHOD_MASK0 0x8
#define IB_USER_MAD_LONGS_PER_METHOD_MASK (128 / (8 * sizeof(long)))
#define CM_REQ_DGID_POS 80
#define CM_SIDR_REQ_DGID_POS 44
/* The below can be override by command line parameter */
#define UNIX_SOCKET_PATH "/var/run/rdmacm-mux"
/* Has format %s-%s-%d" <path>-<rdma-dev--name>-<port> */
#define SOCKET_PATH_MAX (PATH_MAX - NAME_MAX - sizeof(int) - 2)
#define RDMA_PORT_NUM 1
typedef struct RdmaCmServerArgs {
char unix_socket_path[PATH_MAX];
char rdma_dev_name[NAME_MAX];
int rdma_port_num;
} RdmaCMServerArgs;
typedef struct CommId2FdEntry {
int fd;
int ttl; /* Initialized to 2, decrement each timeout, entry delete when 0 */
__be64 gid_ifid;
} CommId2FdEntry;
typedef struct RdmaCmUMadAgent {
int port_id;
int agent_id;
GHashTable *gid2fd; /* Used to find fd of a given gid */
GHashTable *commid2fd; /* Used to find fd on of a given comm_id */
} RdmaCmUMadAgent;
typedef struct RdmaCmServer {
bool run;
RdmaCMServerArgs args;
struct pollfd fds[MAX_CLIENTS];
int nfds;
RdmaCmUMadAgent umad_agent;
pthread_t umad_recv_thread;
pthread_rwlock_t lock;
} RdmaCMServer;
static RdmaCMServer server = {0};
static void usage(const char *progname)
{
printf("Usage: %s [OPTION]...\n"
"Start a RDMA-CM multiplexer\n"
"\n"
"\t-h Show this help\n"
"\t-d rdma-device-name Name of RDMA device to register with\n"
"\t-s unix-socket-path Path to unix socket to listen on (default %s)\n"
"\t-p rdma-device-port Port number of RDMA device to register with (default %d)\n",
progname, UNIX_SOCKET_PATH, RDMA_PORT_NUM);
}
static void help(const char *progname)
{
fprintf(stderr, "Try '%s -h' for more information.\n", progname);
}
static void parse_args(int argc, char *argv[])
{
int c;
char unix_socket_path[SOCKET_PATH_MAX];
strcpy(server.args.rdma_dev_name, "");
strcpy(unix_socket_path, UNIX_SOCKET_PATH);
server.args.rdma_port_num = RDMA_PORT_NUM;
while ((c = getopt(argc, argv, "hs:d:p:")) != -1) {
switch (c) {
case 'h':
usage(argv[0]);
exit(0);
case 'd':
strncpy(server.args.rdma_dev_name, optarg, NAME_MAX - 1);
break;
case 's':
/* This is temporary, final name will build below */
strncpy(unix_socket_path, optarg, SOCKET_PATH_MAX - 1);
break;
case 'p':
server.args.rdma_port_num = atoi(optarg);
break;
default:
help(argv[0]);
exit(1);
}
}
if (!strcmp(server.args.rdma_dev_name, "")) {
fprintf(stderr, "Missing RDMA device name\n");
help(argv[0]);
exit(1);
}
/* Build unique unix-socket file name */
snprintf(server.args.unix_socket_path, PATH_MAX, "%s-%s-%d",
unix_socket_path, server.args.rdma_dev_name,
server.args.rdma_port_num);
syslog(LOG_INFO, "unix_socket_path=%s", server.args.unix_socket_path);
syslog(LOG_INFO, "rdma-device-name=%s", server.args.rdma_dev_name);
syslog(LOG_INFO, "rdma-device-port=%d", server.args.rdma_port_num);
}
static void hash_tbl_alloc(void)
{
server.umad_agent.gid2fd = g_hash_table_new_full(g_int64_hash,
g_int64_equal,
g_free, g_free);
server.umad_agent.commid2fd = g_hash_table_new_full(g_int_hash,
g_int_equal,
g_free, g_free);
}
static void hash_tbl_free(void)
{
if (server.umad_agent.commid2fd) {
g_hash_table_destroy(server.umad_agent.commid2fd);
}
if (server.umad_agent.gid2fd) {
g_hash_table_destroy(server.umad_agent.gid2fd);
}
}
static int _hash_tbl_search_fd_by_ifid(__be64 *gid_ifid)
{
int *fd;
fd = g_hash_table_lookup(server.umad_agent.gid2fd, gid_ifid);
if (!fd) {
/* Let's try IPv4 */
*gid_ifid |= 0x00000000ffff0000;
fd = g_hash_table_lookup(server.umad_agent.gid2fd, gid_ifid);
}
return fd ? *fd : 0;
}
static int hash_tbl_search_fd_by_ifid(int *fd, __be64 *gid_ifid)
{
pthread_rwlock_rdlock(&server.lock);
*fd = _hash_tbl_search_fd_by_ifid(gid_ifid);
pthread_rwlock_unlock(&server.lock);
if (!*fd) {
syslog(LOG_WARNING, "Can't find matching for ifid 0x%llx\n", *gid_ifid);
return -ENOENT;
}
return 0;
}
static int hash_tbl_search_fd_by_comm_id(uint32_t comm_id, int *fd,
__be64 *gid_idid)
{
CommId2FdEntry *fde;
pthread_rwlock_rdlock(&server.lock);
fde = g_hash_table_lookup(server.umad_agent.commid2fd, &comm_id);
pthread_rwlock_unlock(&server.lock);
if (!fde) {
syslog(LOG_WARNING, "Can't find matching for comm_id 0x%x\n", comm_id);
return -ENOENT;
}
*fd = fde->fd;
*gid_idid = fde->gid_ifid;
return 0;
}
static RdmaCmMuxErrCode add_fd_ifid_pair(int fd, __be64 gid_ifid)
{
int fd1;
pthread_rwlock_wrlock(&server.lock);
fd1 = _hash_tbl_search_fd_by_ifid(&gid_ifid);
if (fd1) { /* record already exist - an error */
pthread_rwlock_unlock(&server.lock);
return fd == fd1 ? RDMACM_MUX_ERR_CODE_EEXIST :
RDMACM_MUX_ERR_CODE_EACCES;
}
g_hash_table_insert(server.umad_agent.gid2fd, g_memdup(&gid_ifid,
sizeof(gid_ifid)), g_memdup(&fd, sizeof(fd)));
pthread_rwlock_unlock(&server.lock);
syslog(LOG_INFO, "0x%lx registered on socket %d",
be64toh((uint64_t)gid_ifid), fd);
return RDMACM_MUX_ERR_CODE_OK;
}
static RdmaCmMuxErrCode delete_fd_ifid_pair(int fd, __be64 gid_ifid)
{
int fd1;
pthread_rwlock_wrlock(&server.lock);
fd1 = _hash_tbl_search_fd_by_ifid(&gid_ifid);
if (!fd1) { /* record not exist - an error */
pthread_rwlock_unlock(&server.lock);
return RDMACM_MUX_ERR_CODE_ENOTFOUND;
}
g_hash_table_remove(server.umad_agent.gid2fd, g_memdup(&gid_ifid,
sizeof(gid_ifid)));
pthread_rwlock_unlock(&server.lock);
syslog(LOG_INFO, "0x%lx unregistered on socket %d",
be64toh((uint64_t)gid_ifid), fd);
return RDMACM_MUX_ERR_CODE_OK;
}
static void hash_tbl_save_fd_comm_id_pair(int fd, uint32_t comm_id,
uint64_t gid_ifid)
{
CommId2FdEntry fde = {fd, COMMID_TTL, gid_ifid};
pthread_rwlock_wrlock(&server.lock);
g_hash_table_insert(server.umad_agent.commid2fd,
g_memdup(&comm_id, sizeof(comm_id)),
g_memdup(&fde, sizeof(fde)));
pthread_rwlock_unlock(&server.lock);
}
static gboolean remove_old_comm_ids(gpointer key, gpointer value,
gpointer user_data)
{
CommId2FdEntry *fde = (CommId2FdEntry *)value;
return !fde->ttl--;
}
static gboolean remove_entry_from_gid2fd(gpointer key, gpointer value,
gpointer user_data)
{
if (*(int *)value == *(int *)user_data) {
syslog(LOG_INFO, "0x%lx unregistered on socket %d",
be64toh(*(uint64_t *)key), *(int *)value);
return true;
}
return false;
}
static void hash_tbl_remove_fd_ifid_pair(int fd)
{
pthread_rwlock_wrlock(&server.lock);
g_hash_table_foreach_remove(server.umad_agent.gid2fd,
remove_entry_from_gid2fd, (gpointer)&fd);
pthread_rwlock_unlock(&server.lock);
}
static int get_fd(const char *mad, int umad_len, int *fd, __be64 *gid_ifid)
{
struct umad_hdr *hdr = (struct umad_hdr *)mad;
char *data = (char *)hdr + sizeof(*hdr);
int32_t comm_id = 0;
uint16_t attr_id = be16toh(hdr->attr_id);
int rc = 0;
if (umad_len <= sizeof(*hdr)) {
rc = -EINVAL;
syslog(LOG_DEBUG, "Ignoring MAD packets with header only\n");
goto out;
}
switch (attr_id) {
case UMAD_CM_ATTR_REQ:
if (unlikely(umad_len < sizeof(*hdr) + CM_REQ_DGID_POS +
sizeof(*gid_ifid))) {
rc = -EINVAL;
syslog(LOG_WARNING,
"Invalid MAD packet size (%d) for attr_id 0x%x\n", umad_len,
attr_id);
goto out;
}
memcpy(gid_ifid, data + CM_REQ_DGID_POS, sizeof(*gid_ifid));
rc = hash_tbl_search_fd_by_ifid(fd, gid_ifid);
break;
case UMAD_CM_ATTR_SIDR_REQ:
if (unlikely(umad_len < sizeof(*hdr) + CM_SIDR_REQ_DGID_POS +
sizeof(*gid_ifid))) {
rc = -EINVAL;
syslog(LOG_WARNING,
"Invalid MAD packet size (%d) for attr_id 0x%x\n", umad_len,
attr_id);
goto out;
}
memcpy(gid_ifid, data + CM_SIDR_REQ_DGID_POS, sizeof(*gid_ifid));
rc = hash_tbl_search_fd_by_ifid(fd, gid_ifid);
break;
case UMAD_CM_ATTR_REP:
/* Fall through */
case UMAD_CM_ATTR_REJ:
/* Fall through */
case UMAD_CM_ATTR_DREQ:
/* Fall through */
case UMAD_CM_ATTR_DREP:
/* Fall through */
case UMAD_CM_ATTR_RTU:
data += sizeof(comm_id);
/* Fall through */
case UMAD_CM_ATTR_SIDR_REP:
if (unlikely(umad_len < sizeof(*hdr) + sizeof(comm_id))) {
rc = -EINVAL;
syslog(LOG_WARNING,
"Invalid MAD packet size (%d) for attr_id 0x%x\n", umad_len,
attr_id);
goto out;
}
memcpy(&comm_id, data, sizeof(comm_id));
if (comm_id) {
rc = hash_tbl_search_fd_by_comm_id(comm_id, fd, gid_ifid);
}
break;
default:
rc = -EINVAL;
syslog(LOG_WARNING, "Unsupported attr_id 0x%x\n", attr_id);
}
syslog(LOG_DEBUG, "mad_to_vm: %d 0x%x 0x%x\n", *fd, attr_id, comm_id);
out:
return rc;
}
static void *umad_recv_thread_func(void *args)
{
int rc;
RdmaCmMuxMsg msg = {};
int fd = -2;
msg.hdr.msg_type = RDMACM_MUX_MSG_TYPE_REQ;
msg.hdr.op_code = RDMACM_MUX_OP_CODE_MAD;
while (server.run) {
do {
msg.umad_len = sizeof(msg.umad.mad);
rc = umad_recv(server.umad_agent.port_id, &msg.umad, &msg.umad_len,
SLEEP_SECS * SCALE_US);
if ((rc == -EIO) || (rc == -EINVAL)) {
syslog(LOG_CRIT, "Fatal error while trying to read MAD");
}
if (rc == -ETIMEDOUT) {
g_hash_table_foreach_remove(server.umad_agent.commid2fd,
remove_old_comm_ids, NULL);
}
} while (rc && server.run);
if (server.run) {
rc = get_fd(msg.umad.mad, msg.umad_len, &fd,
&msg.hdr.sgid.global.interface_id);
if (rc) {
continue;
}
send(fd, &msg, sizeof(msg), 0);
}
}
return NULL;
}
static int read_and_process(int fd)
{
int rc;
RdmaCmMuxMsg msg = {};
struct umad_hdr *hdr;
uint32_t *comm_id = 0;
uint16_t attr_id;
rc = recv(fd, &msg, sizeof(msg), 0);
syslog(LOG_DEBUG, "Socket %d, recv %d\n", fd, rc);
if (rc < 0 && errno != EWOULDBLOCK) {
syslog(LOG_ERR, "Fail to read from socket %d\n", fd);
return -EIO;
}
if (!rc) {
syslog(LOG_ERR, "Fail to read from socket %d\n", fd);
return -EPIPE;
}
if (msg.hdr.msg_type != RDMACM_MUX_MSG_TYPE_REQ) {
syslog(LOG_WARNING, "Got non-request message (%d) from socket %d\n",
msg.hdr.msg_type, fd);
return -EPERM;
}
switch (msg.hdr.op_code) {
case RDMACM_MUX_OP_CODE_REG:
rc = add_fd_ifid_pair(fd, msg.hdr.sgid.global.interface_id);
break;
case RDMACM_MUX_OP_CODE_UNREG:
rc = delete_fd_ifid_pair(fd, msg.hdr.sgid.global.interface_id);
break;
case RDMACM_MUX_OP_CODE_MAD:
/* If this is REQ or REP then store the pair comm_id,fd to be later
* used for other messages where gid is unknown */
hdr = (struct umad_hdr *)msg.umad.mad;
attr_id = be16toh(hdr->attr_id);
if ((attr_id == UMAD_CM_ATTR_REQ) || (attr_id == UMAD_CM_ATTR_DREQ) ||
(attr_id == UMAD_CM_ATTR_SIDR_REQ) ||
(attr_id == UMAD_CM_ATTR_REP) || (attr_id == UMAD_CM_ATTR_DREP)) {
comm_id = (uint32_t *)(msg.umad.mad + sizeof(*hdr));
hash_tbl_save_fd_comm_id_pair(fd, *comm_id,
msg.hdr.sgid.global.interface_id);
}
syslog(LOG_DEBUG, "vm_to_mad: %d 0x%x 0x%x\n", fd, attr_id,
comm_id ? *comm_id : 0);
rc = umad_send(server.umad_agent.port_id, server.umad_agent.agent_id,
&msg.umad, msg.umad_len, 1, 0);
if (rc) {
syslog(LOG_ERR,
"Fail to send MAD message (0x%x) from socket %d, err=%d",
attr_id, fd, rc);
}
break;
default:
syslog(LOG_ERR, "Got invalid op_code (%d) from socket %d",
msg.hdr.msg_type, fd);
rc = RDMACM_MUX_ERR_CODE_EINVAL;
}
msg.hdr.msg_type = RDMACM_MUX_MSG_TYPE_RESP;
msg.hdr.err_code = rc;
rc = send(fd, &msg, sizeof(msg), 0);
return rc == sizeof(msg) ? 0 : -EPIPE;
}
static int accept_all(void)
{
int fd, rc = 0;
pthread_rwlock_wrlock(&server.lock);
do {
if ((server.nfds + 1) > MAX_CLIENTS) {
syslog(LOG_WARNING, "Too many clients (%d)", server.nfds);
rc = -EIO;
goto out;
}
fd = accept(server.fds[0].fd, NULL, NULL);
if (fd < 0) {
if (errno != EWOULDBLOCK) {
syslog(LOG_WARNING, "accept() failed");
rc = -EIO;
goto out;
}
break;
}
syslog(LOG_INFO, "Client connected on socket %d\n", fd);
server.fds[server.nfds].fd = fd;
server.fds[server.nfds].events = POLLIN;
server.nfds++;
} while (fd != -1);
out:
pthread_rwlock_unlock(&server.lock);
return rc;
}
static void compress_fds(void)
{
int i, j;
int closed = 0;
pthread_rwlock_wrlock(&server.lock);
for (i = 1; i < server.nfds; i++) {
if (!server.fds[i].fd) {
closed++;
for (j = i; j < server.nfds - 1; j++) {
server.fds[j] = server.fds[j + 1];
}
}
}
server.nfds -= closed;
pthread_rwlock_unlock(&server.lock);
}
static void close_fd(int idx)
{
close(server.fds[idx].fd);
syslog(LOG_INFO, "Socket %d closed\n", server.fds[idx].fd);
hash_tbl_remove_fd_ifid_pair(server.fds[idx].fd);
server.fds[idx].fd = 0;
}
static void run(void)
{
int rc, nfds, i;
bool compress = false;
syslog(LOG_INFO, "Service started");
while (server.run) {
rc = poll(server.fds, server.nfds, SLEEP_SECS * SCALE_US);
if (rc < 0) {
if (errno != EINTR) {
syslog(LOG_WARNING, "poll() failed");
}
continue;
}
if (rc == 0) {
continue;
}
nfds = server.nfds;
for (i = 0; i < nfds; i++) {
syslog(LOG_DEBUG, "pollfd[%d]: revents 0x%x, events 0x%x\n", i,
server.fds[i].revents, server.fds[i].events);
if (server.fds[i].revents == 0) {
continue;
}
if (server.fds[i].revents != POLLIN) {
if (i == 0) {
syslog(LOG_NOTICE, "Unexpected poll() event (0x%x)\n",
server.fds[i].revents);
} else {
close_fd(i);
compress = true;
}
continue;
}
if (i == 0) {
rc = accept_all();
if (rc) {
continue;
}
} else {
rc = read_and_process(server.fds[i].fd);
if (rc) {
close_fd(i);
compress = true;
}
}
}
if (compress) {
compress = false;
compress_fds();
}
}
}
static void fini_listener(void)
{
int i;
if (server.fds[0].fd <= 0) {
return;
}
for (i = server.nfds - 1; i >= 0; i--) {
if (server.fds[i].fd) {
close(server.fds[i].fd);
}
}
unlink(server.args.unix_socket_path);
}
static void fini_umad(void)
{
if (server.umad_agent.agent_id) {
umad_unregister(server.umad_agent.port_id, server.umad_agent.agent_id);
}
if (server.umad_agent.port_id) {
umad_close_port(server.umad_agent.port_id);
}
hash_tbl_free();
}
static void fini(void)
{
if (server.umad_recv_thread) {
pthread_join(server.umad_recv_thread, NULL);
server.umad_recv_thread = 0;
}
fini_umad();
fini_listener();
pthread_rwlock_destroy(&server.lock);
syslog(LOG_INFO, "Service going down");
}
static int init_listener(void)
{
struct sockaddr_un sun;
int rc, on = 1;
server.fds[0].fd = socket(AF_UNIX, SOCK_STREAM, 0);
if (server.fds[0].fd < 0) {
syslog(LOG_ALERT, "socket() failed");
return -EIO;
}
rc = setsockopt(server.fds[0].fd, SOL_SOCKET, SO_REUSEADDR, (char *)&on,
sizeof(on));
if (rc < 0) {
syslog(LOG_ALERT, "setsockopt() failed");
rc = -EIO;
goto err;
}
rc = ioctl(server.fds[0].fd, FIONBIO, (char *)&on);
if (rc < 0) {
syslog(LOG_ALERT, "ioctl() failed");
rc = -EIO;
goto err;
}
if (strlen(server.args.unix_socket_path) >= sizeof(sun.sun_path)) {
syslog(LOG_ALERT,
"Invalid unix_socket_path, size must be less than %ld\n",
sizeof(sun.sun_path));
rc = -EINVAL;
goto err;
}
sun.sun_family = AF_UNIX;
rc = snprintf(sun.sun_path, sizeof(sun.sun_path), "%s",
server.args.unix_socket_path);
if (rc < 0 || rc >= sizeof(sun.sun_path)) {
syslog(LOG_ALERT, "Could not copy unix socket path\n");
rc = -EINVAL;
goto err;
}
rc = bind(server.fds[0].fd, (struct sockaddr *)&sun, sizeof(sun));
if (rc < 0) {
syslog(LOG_ALERT, "bind() failed");
rc = -EIO;
goto err;
}
rc = listen(server.fds[0].fd, SERVER_LISTEN_BACKLOG);
if (rc < 0) {
syslog(LOG_ALERT, "listen() failed");
rc = -EIO;
goto err;
}
server.fds[0].events = POLLIN;
server.nfds = 1;
server.run = true;
return 0;
err:
close(server.fds[0].fd);
return rc;
}
static int init_umad(void)
{
long method_mask[IB_USER_MAD_LONGS_PER_METHOD_MASK];
server.umad_agent.port_id = umad_open_port(server.args.rdma_dev_name,
server.args.rdma_port_num);
if (server.umad_agent.port_id < 0) {
syslog(LOG_WARNING, "umad_open_port() failed");
return -EIO;
}
memset(&method_mask, 0, sizeof(method_mask));
method_mask[0] = MAD_METHOD_MASK0;
server.umad_agent.agent_id = umad_register(server.umad_agent.port_id,
UMAD_CLASS_CM,
UMAD_SA_CLASS_VERSION,
MAD_RMPP_VERSION, method_mask);
if (server.umad_agent.agent_id < 0) {
syslog(LOG_WARNING, "umad_register() failed");
return -EIO;
}
hash_tbl_alloc();
return 0;
}
static void signal_handler(int sig, siginfo_t *siginfo, void *context)
{
static bool warned;
/* Prevent stop if clients are connected */
if (server.nfds != 1) {
if (!warned) {
syslog(LOG_WARNING,
"Can't stop while active client exist, resend SIGINT to overid");
warned = true;
return;
}
}
if (sig == SIGINT) {
server.run = false;
fini();
}
exit(0);
}
static int init(void)
{
int rc;
struct sigaction sig = {};
rc = init_listener();
if (rc) {
return rc;
}
rc = init_umad();
if (rc) {
return rc;
}
pthread_rwlock_init(&server.lock, 0);
rc = pthread_create(&server.umad_recv_thread, NULL, umad_recv_thread_func,
NULL);
if (rc) {
syslog(LOG_ERR, "Fail to create UMAD receiver thread (%d)\n", rc);
return rc;
}
sig.sa_sigaction = &signal_handler;
sig.sa_flags = SA_SIGINFO;
rc = sigaction(SIGINT, &sig, NULL);
if (rc < 0) {
syslog(LOG_ERR, "Fail to install SIGINT handler (%d)\n", errno);
return rc;
}
return 0;
}
int main(int argc, char *argv[])
{
int rc;
memset(&server, 0, sizeof(server));
parse_args(argc, argv);
rc = init();
if (rc) {
syslog(LOG_ERR, "Fail to initialize server (%d)\n", rc);
rc = -EAGAIN;
goto out;
}
run();
out:
fini();
return rc;
}


@@ -1,7 +0,0 @@
if have_pvrdma
# FIXME: broken on big endian architectures
executable('rdmacm-mux', files('main.c'), genh,
dependencies: [glib, libumad],
build_by_default: false,
install: false)
endif


@@ -1,61 +0,0 @@
/*
* QEMU paravirtual RDMA - rdmacm-mux declarations
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMACM_MUX_H
#define RDMACM_MUX_H
#include "linux/if.h"
#include <infiniband/verbs.h>
#include <infiniband/umad.h>
#include <rdma/rdma_user_cm.h>
typedef enum RdmaCmMuxMsgType {
RDMACM_MUX_MSG_TYPE_REQ = 0,
RDMACM_MUX_MSG_TYPE_RESP = 1,
} RdmaCmMuxMsgType;
typedef enum RdmaCmMuxOpCode {
RDMACM_MUX_OP_CODE_REG = 0,
RDMACM_MUX_OP_CODE_UNREG = 1,
RDMACM_MUX_OP_CODE_MAD = 2,
} RdmaCmMuxOpCode;
typedef enum RdmaCmMuxErrCode {
RDMACM_MUX_ERR_CODE_OK = 0,
RDMACM_MUX_ERR_CODE_EINVAL = 1,
RDMACM_MUX_ERR_CODE_EEXIST = 2,
RDMACM_MUX_ERR_CODE_EACCES = 3,
RDMACM_MUX_ERR_CODE_ENOTFOUND = 4,
} RdmaCmMuxErrCode;
typedef struct RdmaCmMuxHdr {
RdmaCmMuxMsgType msg_type;
RdmaCmMuxOpCode op_code;
union ibv_gid sgid;
RdmaCmMuxErrCode err_code;
} RdmaCmUHdr;
typedef struct RdmaCmUMad {
struct ib_user_mad hdr;
char mad[RDMA_MAX_PRIVATE_DATA];
} RdmaCmUMad;
typedef struct RdmaCmMuxMsg {
RdmaCmUHdr hdr;
int umad_len;
RdmaCmUMad umad;
} RdmaCmMuxMsg;
#endif


@@ -5,7 +5,6 @@ common_ss.add(when: 'CONFIG_HPPA_DIS', if_true: files('hppa.c'))
common_ss.add(when: 'CONFIG_M68K_DIS', if_true: files('m68k.c'))
common_ss.add(when: 'CONFIG_MICROBLAZE_DIS', if_true: files('microblaze.c'))
common_ss.add(when: 'CONFIG_MIPS_DIS', if_true: files('mips.c', 'nanomips.c'))
-common_ss.add(when: 'CONFIG_NIOS2_DIS', if_true: files('nios2.c'))
common_ss.add(when: 'CONFIG_RISCV_DIS', if_true: files(
'riscv.c',
'riscv-xthead.c',

(File diff suppressed because it is too large.)


@@ -185,12 +185,6 @@ it. Since all recent x86 hardware from the past >10 years is capable of the
System emulator CPUs
--------------------
-Nios II CPU (since 8.2)
-'''''''''''''''''''''''
-The Nios II architecture is orphan. The ``nios2`` guest CPU support is
-deprecated and will be removed in a future version of QEMU.
``power5+`` and ``power7+`` CPU names (since 9.0)
'''''''''''''''''''''''''''''''''''''''''''''''''
@@ -226,11 +220,6 @@ These old machine types are quite neglected nowadays and thus might have
various pitfalls with regards to live migration. Use a newer machine type
instead.
-Nios II ``10m50-ghrd`` and ``nios2-generic-nommu`` machines (since 8.2)
-'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
-The Nios II architecture is orphan.
``shix`` (since 9.0)
''''''''''''''''''''
@@ -376,15 +365,6 @@ recommending to switch to their stable counterparts:
- "Zve64f" should be replaced with "zve64f"
- "Zve64d" should be replaced with "zve64d"
-``-device pvrdma`` and the rdma subsystem (since 8.2)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The pvrdma device and the whole rdma subsystem are in a bad shape and
-without active maintenance. The QEMU project intends to remove this
-device and subsystem from the code base in a future release without
-replacement unless somebody steps up and improves the situation.
Block device options
''''''''''''''''''''


@@ -58,10 +58,6 @@ depending on the guest architecture.
- :ref:`Yes<MIPS-System-emulator>`
- Yes
- Venerable RISC architecture originally out of Stanford University
-* - Nios2
-  - Yes
-  - Yes
-  - 32 bit embedded soft-core by Altera
* - OpenRISC
- :ref:`Yes<OpenRISC-System-emulator>`
- Yes
@@ -180,9 +176,6 @@ for that architecture.
* - MIPS
- System
- Unified Hosting Interface (MD01069)
-* - Nios II
-  - System
-  - https://sourceware.org/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=libgloss/nios2/nios2-semi.txt;hb=HEAD
* - RISC-V
- System and User-mode
- https://github.com/riscv/riscv-semihosting-spec/blob/main/riscv-semihosting-spec.adoc


@@ -757,6 +757,12 @@ x86 ``Icelake-Client`` CPU (removed in 7.1)
There isn't ever Icelake Client CPU, it is some wrong and imaginary one.
Use ``Icelake-Server`` instead.
+Nios II CPU (removed in 9.1)
+''''''''''''''''''''''''''''
+QEMU Nios II architecture was orphan; Intel has EOL'ed the Nios II
+processor IP (see `Intel discontinuance notification`_).
System accelerators
-------------------
@@ -841,6 +847,11 @@ ppc ``taihu`` machine (removed in 7.2)
This machine was removed because it was partially emulated and 405
machines are very similar. Use the ``ref405ep`` machine instead.
+Nios II ``10m50-ghrd`` and ``nios2-generic-nommu`` machines (removed in 9.1)
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+The Nios II architecture was orphan.
linux-user mode CPUs
--------------------
@@ -860,6 +871,11 @@ The ``ppc64abi32`` architecture has a number of issues which regularly
tripped up the CI testing and was suspected to be quite broken. For that
reason the maintainers strongly suspected no one actually used it.
+``nios2`` CPU (removed in 9.1)
+''''''''''''''''''''''''''''''
+QEMU Nios II architecture was orphan; Intel has EOL'ed the Nios II
+processor IP (see `Intel discontinuance notification`_).
TCG introspection features
--------------------------
@@ -909,6 +925,10 @@ contains native support for this feature and thus use of the option
ROM approach was obsolete. The native SeaBIOS support can be activated
by using ``-machine graphics=off``.
+``pvrdma`` and the RDMA subsystem (removed in 9.1)
+''''''''''''''''''''''''''''''''''''''''''''''''''
+The 'pvrdma' device and the whole RDMA subsystem have been removed.
Related binaries
----------------
@@ -1006,3 +1026,4 @@ stable for some time and is now widely used.
The command line and feature set is very close to the removed
C implementation.
+.. _Intel discontinuance notification: https://www.intel.com/content/www/us/en/content-details/781327/intel-is-discontinuing-ip-ordering-codes-listed-in-pdn2312-for-nios-ii-ip.html


@@ -1,345 +0,0 @@
Paravirtualized RDMA Device (PVRDMA)
====================================
1. Description
===============
PVRDMA is the QEMU implementation of VMware's paravirtualized RDMA device.
It works with its Linux Kernel driver AS IS, no need for any special guest
modifications.
While it complies with the VMware device, it can also communicate with bare
metal RDMA-enabled machines as peers.
It does not require an RDMA HCA in the host, it can work with Soft-RoCE (rxe).
It does not require the whole guest RAM to be pinned allowing memory
over-commit and, even if not implemented yet, migration support will be
possible with some HW assistance.
A project presentation accompany this document:
- https://blog.linuxplumbersconf.org/2017/ocw/system/presentations/4730/original/lpc-2017-pvrdma-marcel-apfelbaum-yuval-shaia.pdf
2. Setup
========
2.1 Guest setup
===============
Fedora 27+ kernels work out of the box, older distributions
require updating the kernel to 4.14 to include the pvrdma driver.
However the libpvrdma library needed by User Level Software is still
not available as part of the distributions, so the rdma-core library
needs to be compiled and optionally installed.
Please follow the instructions at:
https://github.com/linux-rdma/rdma-core.git
2.2 Host Setup
==============
The pvrdma backend is an ibdevice interface that can be exposed
either by a Soft-RoCE(rxe) device on machines with no RDMA device,
or an HCA SRIOV function(VF/PF).
Note that ibdevice interfaces can't be shared between pvrdma devices,
each one requiring a separate instance (rxe or SRIOV VF).
2.2.1 Soft-RoCE backend(rxe)
===========================
A stable version of rxe is required, Fedora 27+ or a Linux
Kernel 4.14+ is preferred.
The rdma_rxe module is part of the Linux Kernel but not loaded by default.
Install the User Level library (librxe) following the instructions from:
https://github.com/SoftRoCE/rxe-dev/wiki/rxe-dev:-Home
Associate an ETH interface with rxe by running:
rxe_cfg add eth0
An rxe0 ibdevice interface will be created and can be used as pvrdma backend.
2.2.2 RDMA device Virtual Function backend
==========================================
Nothing special is required, the pvrdma device can work not only with
Ethernet Links, but also Infinibands Links.
All is needed is an ibdevice with an active port, for Mellanox cards
will be something like mlx5_6 which can be the backend.
2.2.3 QEMU setup
================
Configure QEMU with --enable-rdma flag, installing
the required RDMA libraries.
3. Usage
========
3.1 VM Memory settings
======================
Currently the device is working only with memory backed RAM
and it must be mark as "shared":
-m 1G \
-object memory-backend-ram,id=mb1,size=1G,share \
-numa node,memdev=mb1 \
3.2 MAD Multiplexer
===================
MAD Multiplexer is a service that exposes MAD-like interface for VMs in
order to overcome the limitation where only single entity can register with
MAD layer to send and receive RDMA-CM MAD packets.
To build rdmacm-mux run
# make rdmacm-mux
Before running the rdmacm-mux make sure that both ib_cm and rdma_cm kernel
modules aren't loaded, otherwise the rdmacm-mux service will fail to start.
The application accepts 3 command line arguments and exposes a UNIX socket
to pass control and data to it.
-d rdma-device-name Name of RDMA device to register with
-s unix-socket-path Path to unix socket to listen (default /var/run/rdmacm-mux)
-p rdma-device-port Port number of RDMA device to register with (default 1)
The final UNIX socket file name is a concatenation of the 3 arguments so
for example for device mlx5_0 on port 2 this /var/run/rdmacm-mux-mlx5_0-2
will be created.
pvrdma requires this service.
Please refer to contrib/rdmacm-mux for more details.
3.3 Service exposed by libvirt daemon
=====================================
The control over the RDMA device's GID table is done by updating the
device's Ethernet function addresses.
Usually the first GID entry is determined by the MAC address, the second by
the first IPv6 address and the third by the IPv4 address. Other entries can
be added by adding more IP addresses. The opposite is the same, i.e.
whenever an address is removed, the corresponding GID entry is removed.
The process is done by the network and RDMA stacks. Whenever an address is
added the ib_core driver is notified and calls the device driver add_gid
function which in turn update the device.
To support this in pvrdma device the device hooks into the create_bind and
destroy_bind HW commands triggered by pvrdma driver in guest.
Whenever changed is made to the pvrdma port's GID table a special QMP
messages is sent to be processed by libvirt to update the address of the
backend Ethernet device.
pvrdma requires that libvirt service will be up.
3.4 PCI devices settings
========================
RoCE device exposes two functions - an Ethernet and RDMA.
To support it, pvrdma device is composed of two PCI functions, an Ethernet
device of type vmxnet3 on PCI slot 0 and a PVRDMA device on PCI slot 1. The
Ethernet function can be used for other Ethernet purposes such as IP.
3.5 Device parameters
=====================
- netdev: Specifies the Ethernet device function name on the host for
example enp175s0f0. For Soft-RoCE device (rxe) this would be the Ethernet
device used to create it.
- ibdev: The IB device name on host for example rxe0, mlx5_0 etc.
- mad-chardev: The name of the MAD multiplexer char device.
- ibport: In case of multi-port device (such as Mellanox's HCA) this
specify the port to use. If not set 1 will be used.
- dev-caps-max-mr-size: The maximum size of MR.
- dev-caps-max-qp: Maximum number of QPs.
- dev-caps-max-cq: Maximum number of CQs.
- dev-caps-max-mr: Maximum number of MRs.
- dev-caps-max-pd: Maximum number of PDs.
- dev-caps-max-ah: Maximum number of AHs.
Notes:
- The first 3 parameters are mandatory settings, the rest have their
defaults.
- The last 8 parameters (the ones that prefixed by dev-caps) defines the top
limits but the final values is adjusted by the backend device limitations.
- netdev can be extracted from ibdev's sysfs
(/sys/class/infiniband/<ibdev>/device/net/)
3.6 Example
===========
Define bridge device with vmxnet3 network backend:
<interface type='bridge'>
<mac address='56:b4:44:e9:62:dc'/>
<source bridge='bridge1'/>
<model type='vmxnet3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0' multifunction='on'/>
</interface>
Define pvrdma device:
<qemu:commandline>
<qemu:arg value='-object'/>
<qemu:arg value='memory-backend-ram,id=mb1,size=1G,share'/>
<qemu:arg value='-numa'/>
<qemu:arg value='node,memdev=mb1'/>
<qemu:arg value='-chardev'/>
<qemu:arg value='socket,path=/var/run/rdmacm-mux-rxe0-1,id=mads'/>
<qemu:arg value='-device'/>
<qemu:arg value='pvrdma,addr=10.1,ibdev=rxe0,netdev=bridge0,mad-chardev=mads'/>
</qemu:commandline>
4. Implementation details
=========================
4.1 Overview
============
The device acts like a proxy between the Guest Driver and the host
ibdevice interface.
On configuration path:
- For every hardware resource request (PD/QP/CQ/...) the pvrdma will request
a resource from the backend interface, maintaining a 1-1 mapping
between the guest and host.
On data path:
- Every post_send/receive received from the guest will be converted into
a post_send/receive for the backend. The buffers data will not be touched
or copied resulting in near bare-metal performance for large enough buffers.
- Completions from the backend interface will result in completions for
the pvrdma device.
4.2 PCI BARs
============
PCI Bars:
BAR 0 - MSI-X
MSI-X vectors:
(0) Command - used when execution of a command is completed.
(1) Async - not in use.
(2) Completion - used when a completion event is placed in
device's CQ ring.
BAR 1 - Registers
--------------------------------------------------------
| VERSION | DSR | CTL | REQ | ERR | ICR | IMR | MAC |
--------------------------------------------------------
DSR - Address of driver/device shared memory used
for the command channel, used for passing:
- General info such as driver version
- Address of 'command' and 'response'
- Address of async ring
- Address of device's CQ ring
- Device capabilities
CTL - Device control operations (activate, reset etc)
IMG - Set interrupt mask
REQ - Command execution register
ERR - Operation status
BAR 2 - UAR
---------------------------------------------------------
| QP_NUM | SEND/RECV Flag || CQ_NUM | ARM/POLL Flag |
---------------------------------------------------------
- Offset 0 used for QP operations (send and recv)
- Offset 4 used for CQ operations (arm and poll)
4.3 Major flows
===============
4.3.1 Create CQ
===============
- Guest driver
- Allocates pages for CQ ring
- Creates page directory (pdir) to hold CQ ring's pages
- Initializes CQ ring
- Initializes 'Create CQ' command object (cqe, pdir etc)
- Copies the command to 'command' address
- Writes 0 into REQ register
- Device
- Reads the request object from the 'command' address
- Allocates CQ object and initialize CQ ring based on pdir
- Creates the backend CQ
- Writes operation status to ERR register
- Posts command-interrupt to guest
- Guest driver
- Reads the HW response code from ERR register
4.3.2 Create QP
===============
- Guest driver
- Allocates pages for send and receive rings
- Creates page directory(pdir) to hold the ring's pages
- Initializes 'Create QP' command object (max_send_wr,
send_cq_handle, recv_cq_handle, pdir etc)
- Copies the object to 'command' address
- Write 0 into REQ register
- Device
- Reads the request object from 'command' address
- Allocates the QP object and initialize
- Send and recv rings based on pdir
- Send and recv ring state
- Creates the backend QP
- Writes the operation status to ERR register
- Posts command-interrupt to guest
- Guest driver
- Reads the HW response code from ERR register
4.3.3 Post receive
==================
- Guest driver
- Initializes a wqe and place it on recv ring
- Write to qpn|qp_recv_bit (31) to QP offset in UAR
- Device
- Extracts qpn from UAR
- Walks through the ring and does the following for each wqe
- Prepares the backend CQE context to be used when
receiving completion from backend (wr_id, op_code, emu_cq_num)
- For each sge prepares backend sge
- Calls backend's post_recv
4.3.4 Process backend events
============================
- Done by a dedicated thread used to process backend events;
at initialization is attached to the device and creates
the communication channel.
- Thread main loop:
- Polls for completions
- Extracts QEMU _cq_num, wr_id and op_code from context
- Writes CQE to CQ ring
- Writes CQ number to device CQ
- Sends completion-interrupt to guest
- Deallocates context
- Acks the event to backend
5. Limitations
==============
- The device obviously is limited by the Guest Linux Driver features implementation
of the VMware device API.
- Memory registration mechanism requires mremap for every page in the buffer in order
to map it to a contiguous virtual address range. Since this is not the data path
it should not matter much. If the default max mr size is increased, be aware that
memory registration can take up to 0.5 seconds for 1GB of memory.
- The device requires target page size to be the same as the host page size,
otherwise it will fail to init.
- QEMU cannot map guest RAM from a file descriptor if a pvrdma device is attached,
so it can't work with huge pages. The limitation will be addressed in the future,
however QEMU allocates Guest RAM with MADV_HUGEPAGE so if there are enough huge
pages available, QEMU will use them. QEMU will fail to init if the requirements
are not met.
6. Performance
==============
By design the pvrdma device exits on each post-send/receive, so for small buffers
the performance is affected; however for medium buffers it will became close to
bare metal and from 1MB buffers and up it reaches bare metal performance.
(tested with 2 VMs, the pvrdma devices connected to 2 VFs of the same device)
All the above assumes no memory registration is done on data path.


@@ -87,8 +87,8 @@ These are specified using a special URL syntax.
``GlusterFS``
GlusterFS is a user space distributed file system. QEMU supports the
-use of GlusterFS volumes for hosting VM disk images using TCP, Unix
-Domain Sockets and RDMA transport protocols.
+use of GlusterFS volumes for hosting VM disk images using TCP and Unix
+Domain Sockets transport protocols.
Syntax for specifying a VM disk image on GlusterFS volume is


@@ -39,7 +39,7 @@ can be accessed by following steps.
.. code-block:: bash
-./configure --disable-rdma --disable-pvrdma --prefix=/usr \
+./configure --disable-rdma --prefix=/usr \
--target-list="loongarch64-softmmu" \
--disable-libiscsi --disable-libnfs --disable-libpmem \
--disable-glusterfs --enable-libusb --enable-usb-redir \


@@ -737,7 +737,6 @@ Examples
|qemu_system| -drive file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
|qemu_system| -drive file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
|qemu_system| -drive file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
-|qemu_system| -drive file=gluster+rdma://1.2.3.4:24007/testvol/a.img
|qemu_system| -drive file=gluster://1.2.3.4/testvol/a.img,file.debug=9,file.logfile=/var/log/qemu-gluster.log
|qemu_system| 'json:{"driver":"qcow2",
"file":{"driver":"gluster",


@@ -24,7 +24,7 @@ Deterministic replay has the following features:
* Writes execution log into the file for later replaying for multiple times
on different machines.
* Supports i386, x86_64, ARM, AArch64, Risc-V, MIPS, MIPS64, S390X, Alpha,
-PowerPC, PowerPC64, M68000, Microblaze, OpenRISC, Nios II, SPARC,
+PowerPC, PowerPC64, M68000, Microblaze, OpenRISC, SPARC,
and Xtensa hardware platforms.
* Performs deterministic replay of all operations with keyboard and mouse
input devices, serial ports, and network.


@@ -159,10 +159,6 @@ Other binaries
* ``qemu-mipsn32el`` executes 32-bit little endian MIPS binaries (MIPS N32
ABI).
-- user mode (NiosII)
-* ``qemu-nios2`` TODO.
- user mode (PowerPC)
* ``qemu-ppc64`` TODO.


@@ -152,7 +152,7 @@ static void parts64_default_nan(FloatParts64 *p, float_status *status)
/*
* This case is true for Alpha, ARM, MIPS, OpenRISC, PPC, RISC-V,
* S390, SH4, TriCore, and Xtensa. Our other supported targets,
-* CRIS and Nios2, do not have floating-point.
+* such CRIS, do not have floating-point.
*/
if (snan_bit_is_one(status)) {
/* set all bits other than msb */


@@ -182,19 +182,6 @@ SRST
Show PIC state.
ERST
-{
-.name = "rdma",
-.args_type = "",
-.params = "",
-.help = "show RDMA state",
-.cmd_info_hrt = qmp_x_query_rdma,
-},
-SRST
-``info rdma``
-Show RDMA state.
-ERST
{
.name = "pci",
.args_type = "",


@@ -29,7 +29,6 @@ source pci-bridge/Kconfig
source pci-host/Kconfig
source pcmcia/Kconfig
source pci/Kconfig
-source rdma/Kconfig
source remote/Kconfig
source rtc/Kconfig
source scsi/Kconfig
@@ -57,7 +56,6 @@ source loongarch/Kconfig
source m68k/Kconfig
source microblaze/Kconfig
source mips/Kconfig
-source nios2/Kconfig
source openrisc/Kconfig
source ppc/Kconfig
source riscv/Kconfig


@@ -12,7 +12,6 @@
#include "hw/boards.h"
#include "hw/intc/intc.h"
#include "hw/mem/memory-device.h"
-#include "hw/rdma/rdma.h"
#include "qapi/error.h"
#include "qapi/qapi-builtin-visit.h"
#include "qapi/qapi-commands-machine.h"
@@ -291,37 +290,6 @@ MemoryInfo *qmp_query_memory_size_summary(Error **errp)
return mem_info;
}
-static int qmp_x_query_rdma_foreach(Object *obj, void *opaque)
-{
-RdmaProvider *rdma;
-RdmaProviderClass *k;
-GString *buf = opaque;
-if (object_dynamic_cast(obj, INTERFACE_RDMA_PROVIDER)) {
-rdma = RDMA_PROVIDER(obj);
-k = RDMA_PROVIDER_GET_CLASS(obj);
-if (k->format_statistics) {
-k->format_statistics(rdma, buf);
-} else {
-g_string_append_printf(buf,
-"RDMA statistics not available for %s.\n",
-object_get_typename(obj));
-}
-}
-return 0;
-}
-HumanReadableText *qmp_x_query_rdma(Error **errp)
-{
-g_autoptr(GString) buf = g_string_new("");
-object_child_foreach_recursive(object_get_root(),
-qmp_x_query_rdma_foreach, buf);
-return human_readable_text_from_str(buf);
-}
HumanReadableText *qmp_x_query_ramblock(Error **errp)
{
g_autoptr(GString) buf = ram_block_format();


@@ -87,9 +87,6 @@ config GOLDFISH_PIC
config M68K_IRQC
bool
-config NIOS2_VIC
-bool
config LOONGARCH_IPI
bool


@@ -68,7 +68,6 @@ specific_ss.add(when: 'CONFIG_XIVE', if_true: files('xive.c'))
specific_ss.add(when: ['CONFIG_KVM', 'CONFIG_XIVE'],
if_true: files('spapr_xive_kvm.c'))
specific_ss.add(when: 'CONFIG_M68K_IRQC', if_true: files('m68k_irqc.c'))
-specific_ss.add(when: 'CONFIG_NIOS2_VIC', if_true: files('nios2_vic.c'))
specific_ss.add(when: 'CONFIG_LOONGARCH_IPI', if_true: files('loongarch_ipi.c'))
specific_ss.add(when: 'CONFIG_LOONGARCH_PCH_PIC', if_true: files('loongarch_pch_pic.c'))
specific_ss.add(when: 'CONFIG_LOONGARCH_PCH_MSI', if_true: files('loongarch_pch_msi.c'))


@@ -1,313 +0,0 @@
/*
* Vectored Interrupt Controller for nios2 processor
*
* Copyright (c) 2022 Neuroblade
*
* Interface:
* QOM property "cpu": link to the Nios2 CPU (must be set)
* Unnamed GPIO inputs 0..NIOS2_VIC_MAX_IRQ-1: input IRQ lines
* IRQ should be connected to nios2 IRQ0.
*
* Reference: "Embedded Peripherals IP User Guide
* for Intel® Quartus® Prime Design Suite: 21.4"
* Chapter 38 "Vectored Interrupt Controller Core"
* See: https://www.intel.com/content/www/us/en/docs/programmable/683130/21-4/vectored-interrupt-controller-core.html
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "hw/irq.h"
#include "hw/qdev-properties.h"
#include "hw/sysbus.h"
#include "migration/vmstate.h"
#include "qapi/error.h"
#include "qemu/bitops.h"
#include "qemu/log.h"
#include "qom/object.h"
#include "hw/intc/nios2_vic.h"
#include "cpu.h"
enum {
INT_CONFIG0 = 0,
INT_CONFIG31 = 31,
INT_ENABLE = 32,
INT_ENABLE_SET = 33,
INT_ENABLE_CLR = 34,
INT_PENDING = 35,
INT_RAW_STATUS = 36,
SW_INTERRUPT = 37,
SW_INTERRUPT_SET = 38,
SW_INTERRUPT_CLR = 39,
VIC_CONFIG = 40,
VIC_STATUS = 41,
VEC_TBL_BASE = 42,
VEC_TBL_ADDR = 43,
CSR_COUNT /* Last! */
};
/* Requested interrupt level (INT_CONFIG[0:5]) */
static inline uint32_t vic_int_config_ril(const Nios2VIC *vic, int irq_num)
{
return extract32(vic->int_config[irq_num], 0, 6);
}
/* Requested NMI (INT_CONFIG[6]) */
static inline uint32_t vic_int_config_rnmi(const Nios2VIC *vic, int irq_num)
{
return extract32(vic->int_config[irq_num], 6, 1);
}
/* Requested register set (INT_CONFIG[7:12]) */
static inline uint32_t vic_int_config_rrs(const Nios2VIC *vic, int irq_num)
{
return extract32(vic->int_config[irq_num], 7, 6);
}
static inline uint32_t vic_config_vec_size(const Nios2VIC *vic)
{
return 1 << (2 + extract32(vic->vic_config, 0, 3));
}
static inline uint32_t vic_int_pending(const Nios2VIC *vic)
{
return (vic->int_raw_status | vic->sw_int) & vic->int_enable;
}
static void vic_update_irq(Nios2VIC *vic)
{
Nios2CPU *cpu = NIOS2_CPU(vic->cpu);
uint32_t pending = vic_int_pending(vic);
int irq = -1;
int max_ril = 0;
/* Note that if RIL is 0 for an interrupt it is effectively disabled */
vic->vec_tbl_addr = 0;
vic->vic_status = 0;
if (pending == 0) {
qemu_irq_lower(vic->output_int);
return;
}
for (int i = 0; i < NIOS2_VIC_MAX_IRQ; i++) {
if (pending & BIT(i)) {
int ril = vic_int_config_ril(vic, i);
if (ril > max_ril) {
irq = i;
max_ril = ril;
}
}
}
if (irq < 0) {
qemu_irq_lower(vic->output_int);
return;
}
vic->vec_tbl_addr = irq * vic_config_vec_size(vic) + vic->vec_tbl_base;
vic->vic_status = irq | BIT(31);
/*
* In hardware, the interface between the VIC and the CPU is via the
* External Interrupt Controller interface, where the interrupt controller
* presents the CPU with a packet of data containing:
* - Requested Handler Address (RHA): 32 bits
* - Requested Register Set (RRS) : 6 bits
* - Requested Interrupt Level (RIL) : 6 bits
* - Requested NMI flag (RNMI) : 1 bit
* In our emulation, we implement this by writing the data directly to
* fields in the CPU object and then raising the IRQ line to tell
* the CPU that we've done so.
*/
cpu->rha = vic->vec_tbl_addr;
cpu->ril = max_ril;
cpu->rrs = vic_int_config_rrs(vic, irq);
cpu->rnmi = vic_int_config_rnmi(vic, irq);
qemu_irq_raise(vic->output_int);
}
static void vic_set_irq(void *opaque, int irq_num, int level)
{
Nios2VIC *vic = opaque;
vic->int_raw_status = deposit32(vic->int_raw_status, irq_num, 1, !!level);
vic_update_irq(vic);
}
static void nios2_vic_reset(DeviceState *dev)
{
Nios2VIC *vic = NIOS2_VIC(dev);
memset(&vic->int_config, 0, sizeof(vic->int_config));
vic->vic_config = 0;
vic->int_raw_status = 0;
vic->int_enable = 0;
vic->sw_int = 0;
vic->vic_status = 0;
vic->vec_tbl_base = 0;
vic->vec_tbl_addr = 0;
}
static uint64_t nios2_vic_csr_read(void *opaque, hwaddr offset, unsigned size)
{
Nios2VIC *vic = opaque;
int index = offset / 4;
switch (index) {
case INT_CONFIG0 ... INT_CONFIG31:
return vic->int_config[index - INT_CONFIG0];
case INT_ENABLE:
return vic->int_enable;
case INT_PENDING:
return vic_int_pending(vic);
case INT_RAW_STATUS:
return vic->int_raw_status;
case SW_INTERRUPT:
return vic->sw_int;
case VIC_CONFIG:
return vic->vic_config;
case VIC_STATUS:
return vic->vic_status;
case VEC_TBL_BASE:
return vic->vec_tbl_base;
case VEC_TBL_ADDR:
return vic->vec_tbl_addr;
default:
return 0;
}
}
static void nios2_vic_csr_write(void *opaque, hwaddr offset, uint64_t value,
unsigned size)
{
Nios2VIC *vic = opaque;
int index = offset / 4;
switch (index) {
case INT_CONFIG0 ... INT_CONFIG31:
vic->int_config[index - INT_CONFIG0] = value;
break;
case INT_ENABLE:
vic->int_enable = value;
break;
case INT_ENABLE_SET:
vic->int_enable |= value;
break;
case INT_ENABLE_CLR:
vic->int_enable &= ~value;
break;
case SW_INTERRUPT:
vic->sw_int = value;
break;
case SW_INTERRUPT_SET:
vic->sw_int |= value;
break;
case SW_INTERRUPT_CLR:
vic->sw_int &= ~value;
break;
case VIC_CONFIG:
vic->vic_config = value;
break;
case VEC_TBL_BASE:
vic->vec_tbl_base = value;
break;
default:
qemu_log_mask(LOG_GUEST_ERROR,
"nios2-vic: write to invalid CSR address %#"
HWADDR_PRIx "\n", offset);
}
vic_update_irq(vic);
}
static const MemoryRegionOps nios2_vic_csr_ops = {
.read = nios2_vic_csr_read,
.write = nios2_vic_csr_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.valid = { .min_access_size = 4, .max_access_size = 4 }
};
static void nios2_vic_realize(DeviceState *dev, Error **errp)
{
Nios2VIC *vic = NIOS2_VIC(dev);
if (!vic->cpu) {
/* This is a programming error in the code using this device */
error_setg(errp, "nios2-vic 'cpu' link property was not set");
return;
}
sysbus_init_irq(SYS_BUS_DEVICE(dev), &vic->output_int);
qdev_init_gpio_in(dev, vic_set_irq, NIOS2_VIC_MAX_IRQ);
memory_region_init_io(&vic->csr, OBJECT(dev), &nios2_vic_csr_ops, vic,
"nios2.vic.csr", CSR_COUNT * sizeof(uint32_t));
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &vic->csr);
}
static Property nios2_vic_properties[] = {
DEFINE_PROP_LINK("cpu", Nios2VIC, cpu, TYPE_CPU, CPUState *),
DEFINE_PROP_END_OF_LIST()
};
static const VMStateDescription nios2_vic_vmstate = {
.name = "nios2-vic",
.version_id = 1,
.minimum_version_id = 1,
.fields = (const VMStateField[]){
VMSTATE_UINT32_ARRAY(int_config, Nios2VIC, 32),
VMSTATE_UINT32(vic_config, Nios2VIC),
VMSTATE_UINT32(int_raw_status, Nios2VIC),
VMSTATE_UINT32(int_enable, Nios2VIC),
VMSTATE_UINT32(sw_int, Nios2VIC),
VMSTATE_UINT32(vic_status, Nios2VIC),
VMSTATE_UINT32(vec_tbl_base, Nios2VIC),
VMSTATE_UINT32(vec_tbl_addr, Nios2VIC),
VMSTATE_END_OF_LIST()
},
};
static void nios2_vic_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
dc->reset = nios2_vic_reset;
dc->realize = nios2_vic_realize;
dc->vmsd = &nios2_vic_vmstate;
device_class_set_props(dc, nios2_vic_properties);
}
static const TypeInfo nios2_vic_info = {
.name = TYPE_NIOS2_VIC,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(Nios2VIC),
.class_init = nios2_vic_class_init,
};
static void nios2_vic_register_types(void)
{
type_register_static(&nios2_vic_info);
}
type_init(nios2_vic_register_types);

@@ -28,7 +28,6 @@ subdir('pci')
subdir('pci-bridge')
subdir('pci-host')
subdir('pcmcia')
subdir('rdma')
subdir('rtc')
subdir('scsi')
subdir('sd')
@@ -56,7 +55,6 @@ subdir('loongarch')
subdir('m68k')
subdir('microblaze')
subdir('mips')
subdir('nios2')
subdir('openrisc')
subdir('ppc')
subdir('remote')

@@ -1,181 +0,0 @@
/*
* Altera 10M50 Nios2 GHRD
*
* Copyright (c) 2016 Marek Vasut <marek.vasut@gmail.com>
*
* Based on LabX device code
*
* Copyright (c) 2012 Chris Wulff <crwulff@gmail.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see
* <http://www.gnu.org/licenses/lgpl-2.1.html>
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/sysbus.h"
#include "hw/char/serial.h"
#include "hw/intc/nios2_vic.h"
#include "hw/qdev-properties.h"
#include "sysemu/sysemu.h"
#include "hw/boards.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"
#include "qemu/config-file.h"
#include "boot.h"
struct Nios2MachineState {
MachineState parent_obj;
MemoryRegion phys_tcm;
MemoryRegion phys_tcm_alias;
MemoryRegion phys_ram;
MemoryRegion phys_ram_alias;
bool vic;
};
#define TYPE_NIOS2_MACHINE MACHINE_TYPE_NAME("10m50-ghrd")
OBJECT_DECLARE_TYPE(Nios2MachineState, MachineClass, NIOS2_MACHINE)
#define BINARY_DEVICE_TREE_FILE "10m50-devboard.dtb"
static void nios2_10m50_ghrd_init(MachineState *machine)
{
Nios2MachineState *nms = NIOS2_MACHINE(machine);
Nios2CPU *cpu;
DeviceState *dev;
MemoryRegion *address_space_mem = get_system_memory();
ram_addr_t tcm_base = 0x0;
ram_addr_t tcm_size = 0x1000; /* 1 KiB on the real hardware, but QEMU's limit is 4 KiB */
ram_addr_t ram_base = 0x08000000;
ram_addr_t ram_size = 0x08000000;
qemu_irq irq[32];
int i;
/* Physical TCM (tb_ram_1k) with alias at 0xc0000000 */
memory_region_init_ram(&nms->phys_tcm, NULL, "nios2.tcm", tcm_size,
&error_abort);
memory_region_init_alias(&nms->phys_tcm_alias, NULL, "nios2.tcm.alias",
&nms->phys_tcm, 0, tcm_size);
memory_region_add_subregion(address_space_mem, tcm_base, &nms->phys_tcm);
memory_region_add_subregion(address_space_mem, 0xc0000000 + tcm_base,
&nms->phys_tcm_alias);
/* Physical DRAM with alias at 0xc0000000 */
memory_region_init_ram(&nms->phys_ram, NULL, "nios2.ram", ram_size,
&error_abort);
memory_region_init_alias(&nms->phys_ram_alias, NULL, "nios2.ram.alias",
&nms->phys_ram, 0, ram_size);
memory_region_add_subregion(address_space_mem, ram_base, &nms->phys_ram);
memory_region_add_subregion(address_space_mem, 0xc0000000 + ram_base,
&nms->phys_ram_alias);
/* Create CPU. We need to set eic_present between init and realize. */
cpu = NIOS2_CPU(object_new(TYPE_NIOS2_CPU));
/* Enable the External Interrupt Controller within the CPU. */
cpu->eic_present = nms->vic;
/* Configure new exception vectors. */
cpu->reset_addr = 0xd4000000;
cpu->exception_addr = 0xc8000120;
cpu->fast_tlb_miss_addr = 0xc0000100;
qdev_realize_and_unref(DEVICE(cpu), NULL, &error_fatal);
if (nms->vic) {
dev = qdev_new(TYPE_NIOS2_VIC);
MemoryRegion *dev_mr;
qemu_irq cpu_irq;
object_property_set_link(OBJECT(dev), "cpu", OBJECT(cpu), &error_fatal);
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
cpu_irq = qdev_get_gpio_in_named(DEVICE(cpu), "EIC", 0);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, cpu_irq);
for (i = 0; i < 32; i++) {
irq[i] = qdev_get_gpio_in(dev, i);
}
dev_mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 0);
memory_region_add_subregion(address_space_mem, 0x18002000, dev_mr);
} else {
for (i = 0; i < 32; i++) {
irq[i] = qdev_get_gpio_in_named(DEVICE(cpu), "IRQ", i);
}
}
/* Register: Altera 16550 UART */
serial_mm_init(address_space_mem, 0xf8001600, 2, irq[1], 115200,
serial_hd(0), DEVICE_NATIVE_ENDIAN);
/* Register: Timer sys_clk_timer */
dev = qdev_new("ALTR.timer");
qdev_prop_set_uint32(dev, "clock-frequency", 75 * 1000000);
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0xf8001440);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, irq[0]);
/* Register: Timer sys_clk_timer_1 */
dev = qdev_new("ALTR.timer");
qdev_prop_set_uint32(dev, "clock-frequency", 75 * 1000000);
sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0xe0000880);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, irq[5]);
nios2_load_kernel(cpu, ram_base, ram_size, machine->initrd_filename,
BINARY_DEVICE_TREE_FILE, NULL);
}
static bool get_vic(Object *obj, Error **errp)
{
Nios2MachineState *nms = NIOS2_MACHINE(obj);
return nms->vic;
}
static void set_vic(Object *obj, bool value, Error **errp)
{
Nios2MachineState *nms = NIOS2_MACHINE(obj);
nms->vic = value;
}
static void nios2_10m50_ghrd_class_init(ObjectClass *oc, void *data)
{
MachineClass *mc = MACHINE_CLASS(oc);
mc->desc = "Altera 10M50 GHRD Nios II design";
mc->init = nios2_10m50_ghrd_init;
mc->is_default = true;
mc->deprecation_reason = "Nios II architecture is deprecated";
object_class_property_add_bool(oc, "vic", get_vic, set_vic);
object_class_property_set_description(oc, "vic",
"Set on/off to enable/disable the Vectored Interrupt Controller");
}
static const TypeInfo nios2_10m50_ghrd_type_info = {
.name = TYPE_NIOS2_MACHINE,
.parent = TYPE_MACHINE,
.instance_size = sizeof(Nios2MachineState),
.class_init = nios2_10m50_ghrd_class_init,
};
static void nios2_10m50_ghrd_type_init(void)
{
type_register_static(&nios2_10m50_ghrd_type_info);
}
type_init(nios2_10m50_ghrd_type_init);
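/*
 * Illustrative usage (paths are placeholders): the "vic" machine property
 * registered above is settable from the command line, e.g.:
 *
 *     qemu-system-nios2 -M 10m50-ghrd,vic=on -kernel vmlinux
 */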

@@ -1,13 +0,0 @@
config NIOS2_10M50
bool
select NIOS2
select SERIAL
select ALTERA_TIMER
select NIOS2_VIC
config NIOS2_GENERIC_NOMMU
bool
select NIOS2
config NIOS2
bool

@@ -1,234 +0,0 @@
/*
* Nios2 kernel loader
*
* Copyright (c) 2016 Marek Vasut <marek.vasut@gmail.com>
*
* Based on microblaze kernel loader
*
* Copyright (c) 2012 Peter Crosthwaite <peter.crosthwaite@petalogix.com>
* Copyright (c) 2012 PetaLogix
* Copyright (c) 2009 Edgar E. Iglesias.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "qemu/osdep.h"
#include "qemu/units.h"
#include "qemu/datadir.h"
#include "qemu/option.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qemu/guest-random.h"
#include "sysemu/device_tree.h"
#include "sysemu/reset.h"
#include "hw/boards.h"
#include "hw/loader.h"
#include "elf.h"
#include "boot.h"
#include <libfdt.h>
#define NIOS2_MAGIC 0x534f494e /* "NIOS" in little-endian ASCII */
static struct nios2_boot_info {
void (*machine_cpu_reset)(Nios2CPU *);
uint32_t bootstrap_pc;
uint32_t cmdline;
uint32_t initrd_start;
uint32_t initrd_end;
uint32_t fdt;
} boot_info;
static void main_cpu_reset(void *opaque)
{
Nios2CPU *cpu = opaque;
CPUState *cs = CPU(cpu);
CPUNios2State *env = &cpu->env;
cpu_reset(CPU(cpu));
env->regs[R_ARG0] = NIOS2_MAGIC;
env->regs[R_ARG1] = boot_info.initrd_start;
env->regs[R_ARG2] = boot_info.fdt;
env->regs[R_ARG3] = boot_info.cmdline;
cpu_set_pc(cs, boot_info.bootstrap_pc);
if (boot_info.machine_cpu_reset) {
boot_info.machine_cpu_reset(cpu);
}
}
static uint64_t translate_kernel_address(void *opaque, uint64_t addr)
{
return addr - 0xc0000000LL;
}
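/*
 * For illustration: the subtraction undoes the kernel's 0xc0000000 virtual
 * mapping, so a segment linked at virtual 0xc8000000 is loaded at physical
 * 0x08000000, which is the 10m50 board's ram_base.
 */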
static int nios2_load_dtb(struct nios2_boot_info bi, const uint32_t ramsize,
const char *kernel_cmdline, const char *dtb_filename)
{
MachineState *machine = MACHINE(qdev_get_machine());
int fdt_size;
void *fdt = NULL;
int r;
uint8_t rng_seed[32];
if (dtb_filename) {
fdt = load_device_tree(dtb_filename, &fdt_size);
}
if (!fdt) {
return 0;
}
qemu_guest_getrandom_nofail(rng_seed, sizeof(rng_seed));
qemu_fdt_setprop(fdt, "/chosen", "rng-seed", rng_seed, sizeof(rng_seed));
if (kernel_cmdline) {
r = qemu_fdt_setprop_string(fdt, "/chosen", "bootargs",
kernel_cmdline);
if (r < 0) {
fprintf(stderr, "couldn't set /chosen/bootargs\n");
}
}
if (bi.initrd_start) {
qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-start",
translate_kernel_address(NULL, bi.initrd_start));
qemu_fdt_setprop_cell(fdt, "/chosen", "linux,initrd-end",
translate_kernel_address(NULL, bi.initrd_end));
}
cpu_physical_memory_write(bi.fdt, fdt, fdt_size);
/* Set machine->fdt for 'dumpdtb' QMP/HMP command */
machine->fdt = fdt;
return fdt_size;
}
void nios2_load_kernel(Nios2CPU *cpu, hwaddr ddr_base,
uint32_t ramsize,
const char *initrd_filename,
const char *dtb_filename,
void (*machine_cpu_reset)(Nios2CPU *))
{
const char *kernel_filename;
const char *kernel_cmdline;
const char *dtb_arg;
char *filename = NULL;
kernel_filename = current_machine->kernel_filename;
kernel_cmdline = current_machine->kernel_cmdline;
dtb_arg = current_machine->dtb;
/* Default to the dtb filename passed in by the machine's init code. */
if (!dtb_arg) {
filename = qemu_find_file(QEMU_FILE_TYPE_BIOS, dtb_filename);
}
boot_info.machine_cpu_reset = machine_cpu_reset;
qemu_register_reset(main_cpu_reset, cpu);
if (kernel_filename) {
int kernel_size, fdt_size;
uint64_t entry, high;
/* Boots a kernel elf binary. */
kernel_size = load_elf(kernel_filename, NULL, NULL, NULL,
&entry, NULL, &high, NULL,
TARGET_BIG_ENDIAN, EM_ALTERA_NIOS2, 0, 0);
if ((uint32_t)entry == 0xc0000000) {
/*
* The Nios II processor reference guide documents that the
* kernel is placed at virtual memory address 0xc0000000,
* and we've got something that points there. Reload it
* and adjust the entry to get the address in physical RAM.
*/
kernel_size = load_elf(kernel_filename, NULL,
translate_kernel_address, NULL,
&entry, NULL, NULL, NULL,
TARGET_BIG_ENDIAN, EM_ALTERA_NIOS2, 0, 0);
boot_info.bootstrap_pc = ddr_base + 0xc0000000 +
(entry & 0x07ffffff);
} else {
/* Use the entry point in the ELF image. */
boot_info.bootstrap_pc = (uint32_t)entry;
}
/* If it wasn't an ELF image, try a U-Boot image. */
if (kernel_size < 0) {
hwaddr uentry, loadaddr = LOAD_UIMAGE_LOADADDR_INVALID;
kernel_size = load_uimage(kernel_filename, &uentry, &loadaddr, 0,
NULL, NULL);
boot_info.bootstrap_pc = uentry;
high = loadaddr + kernel_size;
}
/* Neither an ELF nor a U-Boot image; try a raw image. */
if (kernel_size < 0) {
kernel_size = load_image_targphys(kernel_filename, ddr_base,
ramsize);
boot_info.bootstrap_pc = ddr_base;
high = ddr_base + kernel_size;
}
high = ROUND_UP(high, 1 * MiB);
/* If initrd is available, it goes after the kernel, aligned to 1M. */
if (initrd_filename) {
int initrd_size;
uint32_t initrd_offset;
boot_info.initrd_start = high;
initrd_offset = boot_info.initrd_start - ddr_base;
initrd_size = load_ramdisk(initrd_filename,
boot_info.initrd_start,
ramsize - initrd_offset);
if (initrd_size < 0) {
initrd_size = load_image_targphys(initrd_filename,
boot_info.initrd_start,
ramsize - initrd_offset);
}
if (initrd_size < 0) {
error_report("could not load initrd '%s'",
initrd_filename);
exit(EXIT_FAILURE);
}
high += initrd_size;
}
high = ROUND_UP(high, 4);
boot_info.initrd_end = high;
/* Device tree must be placed right after initrd (if available) */
boot_info.fdt = high;
fdt_size = nios2_load_dtb(boot_info, ramsize, kernel_cmdline,
/* Prefer an explicit -dtb argument */
dtb_arg ? dtb_arg : filename);
high += fdt_size;
/* Kernel command line goes at the end, 4 KiB aligned. */
boot_info.cmdline = ROUND_UP(high, 4 * KiB);
if (kernel_cmdline && strlen(kernel_cmdline)) {
pstrcpy_targphys("cmdline", boot_info.cmdline, 256, kernel_cmdline);
}
}
g_free(filename);
}
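/*
 * Worked example of the layout this builds, with hypothetical sizes: given
 * ddr_base 0x08000000 and a kernel ending at 0x08123456, "high" is rounded
 * up to 0x08200000 and the initrd is loaded there; a 1 MiB initrd ends at
 * 0x08300000, where the device tree is then written; with an FDT of 0x1800
 * bytes, the command line lands at ROUND_UP(0x08301800, 4 KiB) = 0x08302000.
 */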

@@ -1,10 +0,0 @@
#ifndef NIOS2_BOOT_H
#define NIOS2_BOOT_H
#include "cpu.h"
void nios2_load_kernel(Nios2CPU *cpu, hwaddr ddr_base, uint32_t ramsize,
const char *initrd_filename, const char *dtb_filename,
void (*machine_cpu_reset)(Nios2CPU *));
#endif /* NIOS2_BOOT_H */

@@ -1,101 +0,0 @@
/*
* Generic simulator target with no MMU or devices. This emulation is
* compatible with the libgloss qemu-hosted.ld linker script for using
* QEMU as an instruction set simulator.
*
* Copyright (c) 2018-2019 Mentor Graphics
*
* Copyright (c) 2016 Marek Vasut <marek.vasut@gmail.com>
*
* Based on LabX device code
*
* Copyright (c) 2012 Chris Wulff <crwulff@gmail.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see
* <http://www.gnu.org/licenses/lgpl-2.1.html>
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/char/serial.h"
#include "hw/boards.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"
#include "qemu/config-file.h"
#include "boot.h"
#define BINARY_DEVICE_TREE_FILE "generic-nommu.dtb"
static void nios2_generic_nommu_init(MachineState *machine)
{
Nios2CPU *cpu;
MemoryRegion *address_space_mem = get_system_memory();
MemoryRegion *phys_tcm = g_new(MemoryRegion, 1);
MemoryRegion *phys_tcm_alias = g_new(MemoryRegion, 1);
MemoryRegion *phys_ram = g_new(MemoryRegion, 1);
MemoryRegion *phys_ram_alias = g_new(MemoryRegion, 1);
ram_addr_t tcm_base = 0x0;
ram_addr_t tcm_size = 0x1000; /* 1 KiB on the real hardware, but QEMU's limit is 4 KiB */
ram_addr_t ram_base = 0x10000000;
ram_addr_t ram_size = 0x08000000;
/* Physical TCM (tb_ram_1k) with alias at 0xc0000000 */
memory_region_init_ram(phys_tcm, NULL, "nios2.tcm", tcm_size,
&error_abort);
memory_region_init_alias(phys_tcm_alias, NULL, "nios2.tcm.alias",
phys_tcm, 0, tcm_size);
memory_region_add_subregion(address_space_mem, tcm_base, phys_tcm);
memory_region_add_subregion(address_space_mem, 0xc0000000 + tcm_base,
phys_tcm_alias);
/* Physical DRAM with alias at 0xc0000000 */
memory_region_init_ram(phys_ram, NULL, "nios2.ram", ram_size,
&error_abort);
memory_region_init_alias(phys_ram_alias, NULL, "nios2.ram.alias",
phys_ram, 0, ram_size);
memory_region_add_subregion(address_space_mem, ram_base, phys_ram);
memory_region_add_subregion(address_space_mem, 0xc0000000 + ram_base,
phys_ram_alias);
cpu = NIOS2_CPU(cpu_create(TYPE_NIOS2_CPU));
/* Remove MMU */
cpu->mmu_present = false;
/* Reset vector is the first 32 bytes of RAM. */
cpu->reset_addr = ram_base;
/* The interrupt vector comes right after reset. */
cpu->exception_addr = ram_base + 0x20;
/*
* The linker script does have a TLB miss memory region declared,
* but this should never be used with no MMU.
*/
cpu->fast_tlb_miss_addr = 0x7fff400;
nios2_load_kernel(cpu, ram_base, ram_size, machine->initrd_filename,
BINARY_DEVICE_TREE_FILE, NULL);
}
static void nios2_generic_nommu_machine_init(struct MachineClass *mc)
{
mc->desc = "Generic NOMMU Nios II design";
mc->init = nios2_generic_nommu_init;
mc->deprecation_reason = "Nios II architecture is deprecated";
}
DEFINE_MACHINE("nios2-generic-nommu", nios2_generic_nommu_machine_init);

@@ -1,6 +0,0 @@
nios2_ss = ss.source_set()
nios2_ss.add(files('boot.c'), fdt)
nios2_ss.add(when: 'CONFIG_NIOS2_10M50', if_true: files('10m50_devboard.c'))
nios2_ss.add(when: 'CONFIG_NIOS2_GENERIC_NOMMU', if_true: files('generic_nommu.c'))
hw_arch += {'nios2': nios2_ss}

@@ -1,3 +0,0 @@
config VMW_PVRDMA
default y if PCI_DEVICES
depends on PVRDMA && MSI_NONBROKEN && VMXNET3_PCI

@@ -1,12 +0,0 @@
system_ss.add(when: 'CONFIG_VMW_PVRDMA', if_true: files(
'rdma.c',
'rdma_backend.c',
'rdma_utils.c',
'vmw/pvrdma_qp_ops.c',
))
specific_ss.add(when: 'CONFIG_VMW_PVRDMA', if_true: files(
'rdma_rm.c',
'vmw/pvrdma_cmd.c',
'vmw/pvrdma_dev_ring.c',
'vmw/pvrdma_main.c',
))

@@ -1,30 +0,0 @@
/*
* RDMA device interface
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "hw/rdma/rdma.h"
#include "qemu/module.h"
static const TypeInfo rdma_hmp_info = {
.name = INTERFACE_RDMA_PROVIDER,
.parent = TYPE_INTERFACE,
.class_size = sizeof(RdmaProviderClass),
};
static void rdma_register_types(void)
{
type_register_static(&rdma_hmp_info);
}
type_init(rdma_register_types)

File diff suppressed because it is too large

@@ -1,129 +0,0 @@
/*
* RDMA device: Definitions of Backend Device functions
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_BACKEND_H
#define RDMA_BACKEND_H
#include "qapi/error.h"
#include "chardev/char-fe.h"
#include "rdma_rm_defs.h"
#include "rdma_backend_defs.h"
/* Vendor Errors */
#define VENDOR_ERR_FAIL_BACKEND 0x201
#define VENDOR_ERR_TOO_MANY_SGES 0x202
#define VENDOR_ERR_NOMEM 0x203
#define VENDOR_ERR_QP0 0x204
#define VENDOR_ERR_INV_NUM_SGE 0x205
#define VENDOR_ERR_MAD_SEND 0x206
#define VENDOR_ERR_INVLKEY 0x207
#define VENDOR_ERR_MR_SMALL 0x208
#define VENDOR_ERR_INV_MAD_BUFF 0x209
#define VENDOR_ERR_INV_GID_IDX 0x210
/* Define QP0 and QP1 here, as there are no userspace enums for them */
enum ibv_special_qp_type {
IBV_QPT_SMI = 0,
IBV_QPT_GSI = 1,
};
static inline uint32_t rdma_backend_qpn(const RdmaBackendQP *qp)
{
return qp->ibqp ? qp->ibqp->qp_num : 1;
}
static inline uint32_t rdma_backend_mr_lkey(const RdmaBackendMR *mr)
{
return mr->ibmr ? mr->ibmr->lkey : 0;
}
static inline uint32_t rdma_backend_mr_rkey(const RdmaBackendMR *mr)
{
return mr->ibmr ? mr->ibmr->rkey : 0;
}
int rdma_backend_init(RdmaBackendDev *backend_dev, PCIDevice *pdev,
RdmaDeviceResources *rdma_dev_res,
const char *backend_device_name, uint8_t port_num,
struct ibv_device_attr *dev_attr,
CharBackend *mad_chr_be);
void rdma_backend_fini(RdmaBackendDev *backend_dev);
int rdma_backend_add_gid(RdmaBackendDev *backend_dev, const char *ifname,
union ibv_gid *gid);
int rdma_backend_del_gid(RdmaBackendDev *backend_dev, const char *ifname,
union ibv_gid *gid);
int rdma_backend_get_gid_index(RdmaBackendDev *backend_dev,
union ibv_gid *gid);
void rdma_backend_start(RdmaBackendDev *backend_dev);
void rdma_backend_stop(RdmaBackendDev *backend_dev);
void rdma_backend_register_comp_handler(void (*handler)(void *ctx,
struct ibv_wc *wc));
void rdma_backend_unregister_comp_handler(void);
int rdma_backend_query_port(RdmaBackendDev *backend_dev,
struct ibv_port_attr *port_attr);
int rdma_backend_create_pd(RdmaBackendDev *backend_dev, RdmaBackendPD *pd);
void rdma_backend_destroy_pd(RdmaBackendPD *pd);
int rdma_backend_create_mr(RdmaBackendMR *mr, RdmaBackendPD *pd, void *addr,
size_t length, uint64_t guest_start, int access);
void rdma_backend_destroy_mr(RdmaBackendMR *mr);
int rdma_backend_create_cq(RdmaBackendDev *backend_dev, RdmaBackendCQ *cq,
int cqe);
void rdma_backend_destroy_cq(RdmaBackendCQ *cq);
void rdma_backend_poll_cq(RdmaDeviceResources *rdma_dev_res, RdmaBackendCQ *cq);
int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
RdmaBackendPD *pd, RdmaBackendCQ *scq,
RdmaBackendCQ *rcq, RdmaBackendSRQ *srq,
uint32_t max_send_wr, uint32_t max_recv_wr,
uint32_t max_send_sge, uint32_t max_recv_sge);
int rdma_backend_qp_state_init(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
uint8_t qp_type, uint32_t qkey);
int rdma_backend_qp_state_rtr(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
uint8_t qp_type, uint8_t sgid_idx,
union ibv_gid *dgid, uint32_t dqpn,
uint32_t rq_psn, uint32_t qkey, bool use_qkey);
int rdma_backend_qp_state_rts(RdmaBackendQP *qp, uint8_t qp_type,
uint32_t sq_psn, uint32_t qkey, bool use_qkey);
int rdma_backend_query_qp(RdmaBackendQP *qp, struct ibv_qp_attr *attr,
int attr_mask, struct ibv_qp_init_attr *init_attr);
void rdma_backend_destroy_qp(RdmaBackendQP *qp, RdmaDeviceResources *dev_res);
void rdma_backend_post_send(RdmaBackendDev *backend_dev,
RdmaBackendQP *qp, uint8_t qp_type,
struct ibv_sge *sge, uint32_t num_sge,
uint8_t sgid_idx, union ibv_gid *sgid,
union ibv_gid *dgid, uint32_t dqpn, uint32_t dqkey,
void *ctx);
void rdma_backend_post_recv(RdmaBackendDev *backend_dev,
RdmaBackendQP *qp, uint8_t qp_type,
struct ibv_sge *sge, uint32_t num_sge, void *ctx);
int rdma_backend_create_srq(RdmaBackendSRQ *srq, RdmaBackendPD *pd,
uint32_t max_wr, uint32_t max_sge,
uint32_t srq_limit);
int rdma_backend_query_srq(RdmaBackendSRQ *srq, struct ibv_srq_attr *srq_attr);
int rdma_backend_modify_srq(RdmaBackendSRQ *srq, struct ibv_srq_attr *srq_attr,
int srq_attr_mask);
void rdma_backend_destroy_srq(RdmaBackendSRQ *srq,
RdmaDeviceResources *dev_res);
void rdma_backend_post_srq_recv(RdmaBackendDev *backend_dev,
RdmaBackendSRQ *srq, struct ibv_sge *sge,
uint32_t num_sge, void *ctx);
#endif

@@ -1,76 +0,0 @@
/*
* RDMA device: Definitions of Backend Device structures
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_BACKEND_DEFS_H
#define RDMA_BACKEND_DEFS_H
#include "qemu/thread.h"
#include "chardev/char-fe.h"
#include <infiniband/verbs.h>
#include "contrib/rdmacm-mux/rdmacm-mux.h"
#include "rdma_utils.h"
typedef struct RdmaDeviceResources RdmaDeviceResources;
typedef struct RdmaBackendThread {
QemuThread thread;
bool run; /* Cleared by the thread manager to tell the thread to exit */
bool is_running; /* Set by the thread to report its status */
} RdmaBackendThread;
typedef struct RdmaCmMux {
CharBackend *chr_be;
int can_receive;
} RdmaCmMux;
typedef struct RdmaBackendDev {
RdmaBackendThread comp_thread;
PCIDevice *dev;
RdmaDeviceResources *rdma_dev_res;
struct ibv_device *ib_dev;
struct ibv_context *context;
struct ibv_comp_channel *channel;
uint8_t port_num;
RdmaProtectedGQueue recv_mads_list;
RdmaCmMux rdmacm_mux;
} RdmaBackendDev;
typedef struct RdmaBackendPD {
struct ibv_pd *ibpd;
} RdmaBackendPD;
typedef struct RdmaBackendMR {
struct ibv_pd *ibpd;
struct ibv_mr *ibmr;
} RdmaBackendMR;
typedef struct RdmaBackendCQ {
RdmaBackendDev *backend_dev;
struct ibv_cq *ibcq;
} RdmaBackendCQ;
typedef struct RdmaBackendQP {
struct ibv_pd *ibpd;
struct ibv_qp *ibqp;
uint8_t sgid_idx;
RdmaProtectedGSList cqe_ctx_list;
} RdmaBackendQP;
typedef struct RdmaBackendSRQ {
struct ibv_srq *ibsrq;
RdmaProtectedGSList cqe_ctx_list;
} RdmaBackendSRQ;
#endif

@@ -1,812 +0,0 @@
/*
* QEMU paravirtual RDMA - Resource Manager Implementation
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "cpu.h"
#include "monitor/monitor.h"
#include "trace.h"
#include "rdma_utils.h"
#include "rdma_backend.h"
#include "rdma_rm.h"
void rdma_format_device_counters(RdmaDeviceResources *dev_res, GString *buf)
{
g_string_append_printf(buf, "\ttx : %" PRId64 "\n",
dev_res->stats.tx);
g_string_append_printf(buf, "\ttx_len : %" PRId64 "\n",
dev_res->stats.tx_len);
g_string_append_printf(buf, "\ttx_err : %" PRId64 "\n",
dev_res->stats.tx_err);
g_string_append_printf(buf, "\trx_bufs : %" PRId64 "\n",
dev_res->stats.rx_bufs);
g_string_append_printf(buf, "\trx_srq : %" PRId64 "\n",
dev_res->stats.rx_srq);
g_string_append_printf(buf, "\trx_bufs_len : %" PRId64 "\n",
dev_res->stats.rx_bufs_len);
g_string_append_printf(buf, "\trx_bufs_err : %" PRId64 "\n",
dev_res->stats.rx_bufs_err);
g_string_append_printf(buf, "\tcomps : %" PRId64 "\n",
dev_res->stats.completions);
g_string_append_printf(buf, "\tmissing_comps : %" PRId32 "\n",
dev_res->stats.missing_cqe);
g_string_append_printf(buf, "\tpoll_cq (bk) : %" PRId64 "\n",
dev_res->stats.poll_cq_from_bk);
g_string_append_printf(buf, "\tpoll_cq_ppoll_to : %" PRId64 "\n",
dev_res->stats.poll_cq_ppoll_to);
g_string_append_printf(buf, "\tpoll_cq (fe) : %" PRId64 "\n",
dev_res->stats.poll_cq_from_guest);
g_string_append_printf(buf, "\tpoll_cq_empty : %" PRId64 "\n",
dev_res->stats.poll_cq_from_guest_empty);
g_string_append_printf(buf, "\tmad_tx : %" PRId64 "\n",
dev_res->stats.mad_tx);
g_string_append_printf(buf, "\tmad_tx_err : %" PRId64 "\n",
dev_res->stats.mad_tx_err);
g_string_append_printf(buf, "\tmad_rx : %" PRId64 "\n",
dev_res->stats.mad_rx);
g_string_append_printf(buf, "\tmad_rx_err : %" PRId64 "\n",
dev_res->stats.mad_rx_err);
g_string_append_printf(buf, "\tmad_rx_bufs : %" PRId64 "\n",
dev_res->stats.mad_rx_bufs);
g_string_append_printf(buf, "\tmad_rx_bufs_err : %" PRId64 "\n",
dev_res->stats.mad_rx_bufs_err);
g_string_append_printf(buf, "\tPDs : %" PRId32 "\n",
dev_res->pd_tbl.used);
g_string_append_printf(buf, "\tMRs : %" PRId32 "\n",
dev_res->mr_tbl.used);
g_string_append_printf(buf, "\tUCs : %" PRId32 "\n",
dev_res->uc_tbl.used);
g_string_append_printf(buf, "\tQPs : %" PRId32 "\n",
dev_res->qp_tbl.used);
g_string_append_printf(buf, "\tCQs : %" PRId32 "\n",
dev_res->cq_tbl.used);
g_string_append_printf(buf, "\tCEQ_CTXs : %" PRId32 "\n",
dev_res->cqe_ctx_tbl.used);
}
static inline void res_tbl_init(const char *name, RdmaRmResTbl *tbl,
uint32_t tbl_sz, uint32_t res_sz)
{
tbl->tbl = g_malloc(tbl_sz * res_sz);
strncpy(tbl->name, name, MAX_RM_TBL_NAME);
tbl->name[MAX_RM_TBL_NAME - 1] = 0;
tbl->bitmap = bitmap_new(tbl_sz);
tbl->tbl_sz = tbl_sz;
tbl->res_sz = res_sz;
tbl->used = 0;
qemu_mutex_init(&tbl->lock);
}
static inline void res_tbl_free(RdmaRmResTbl *tbl)
{
if (!tbl->bitmap) {
return;
}
qemu_mutex_destroy(&tbl->lock);
g_free(tbl->tbl);
g_free(tbl->bitmap);
}
static inline void *rdma_res_tbl_get(RdmaRmResTbl *tbl, uint32_t handle)
{
trace_rdma_res_tbl_get(tbl->name, handle);
if ((handle < tbl->tbl_sz) && (test_bit(handle, tbl->bitmap))) {
return tbl->tbl + handle * tbl->res_sz;
} else {
rdma_error_report("Table %s, invalid handle %d", tbl->name, handle);
return NULL;
}
}
static inline void *rdma_res_tbl_alloc(RdmaRmResTbl *tbl, uint32_t *handle)
{
qemu_mutex_lock(&tbl->lock);
*handle = find_first_zero_bit(tbl->bitmap, tbl->tbl_sz);
/* find_first_zero_bit() returns tbl_sz when the bitmap is full */
if (*handle >= tbl->tbl_sz) {
rdma_error_report("Table %s, failed to allocate, bitmap is full",
tbl->name);
qemu_mutex_unlock(&tbl->lock);
return NULL;
}
set_bit(*handle, tbl->bitmap);
tbl->used++;
qemu_mutex_unlock(&tbl->lock);
memset(tbl->tbl + *handle * tbl->res_sz, 0, tbl->res_sz);
trace_rdma_res_tbl_alloc(tbl->name, *handle);
return tbl->tbl + *handle * tbl->res_sz;
}
static inline void rdma_res_tbl_dealloc(RdmaRmResTbl *tbl, uint32_t handle)
{
trace_rdma_res_tbl_dealloc(tbl->name, handle);
QEMU_LOCK_GUARD(&tbl->lock);
if (handle < tbl->tbl_sz) {
clear_bit(handle, tbl->bitmap);
tbl->used--;
}
}
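/*
 * Illustrative lifecycle of the three helpers above: rdma_res_tbl_alloc()
 * hands out a zeroed slot plus its handle, rdma_res_tbl_get() validates a
 * handle against the bitmap before returning the slot, and
 * rdma_res_tbl_dealloc() clears the bit so the slot can be reused.
 */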
int rdma_rm_alloc_pd(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t *pd_handle, uint32_t ctx_handle)
{
RdmaRmPD *pd;
int ret = -ENOMEM;
pd = rdma_res_tbl_alloc(&dev_res->pd_tbl, pd_handle);
if (!pd) {
goto out;
}
ret = rdma_backend_create_pd(backend_dev, &pd->backend_pd);
if (ret) {
ret = -EIO;
goto out_tbl_dealloc;
}
pd->ctx_handle = ctx_handle;
return 0;
out_tbl_dealloc:
rdma_res_tbl_dealloc(&dev_res->pd_tbl, *pd_handle);
out:
return ret;
}
RdmaRmPD *rdma_rm_get_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle)
{
return rdma_res_tbl_get(&dev_res->pd_tbl, pd_handle);
}
void rdma_rm_dealloc_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle)
{
RdmaRmPD *pd = rdma_rm_get_pd(dev_res, pd_handle);
if (pd) {
rdma_backend_destroy_pd(&pd->backend_pd);
rdma_res_tbl_dealloc(&dev_res->pd_tbl, pd_handle);
}
}
int rdma_rm_alloc_mr(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint64_t guest_start, uint64_t guest_length,
void *host_virt, int access_flags, uint32_t *mr_handle,
uint32_t *lkey, uint32_t *rkey)
{
RdmaRmMR *mr;
int ret = 0;
RdmaRmPD *pd;
pd = rdma_rm_get_pd(dev_res, pd_handle);
if (!pd) {
return -EINVAL;
}
mr = rdma_res_tbl_alloc(&dev_res->mr_tbl, mr_handle);
if (!mr) {
return -ENOMEM;
}
trace_rdma_rm_alloc_mr(*mr_handle, host_virt, guest_start, guest_length,
access_flags);
if (host_virt) {
mr->virt = host_virt;
mr->start = guest_start;
mr->length = guest_length;
mr->virt += (mr->start & (TARGET_PAGE_SIZE - 1));
ret = rdma_backend_create_mr(&mr->backend_mr, &pd->backend_pd, mr->virt,
mr->length, guest_start, access_flags);
if (ret) {
ret = -EIO;
goto out_dealloc_mr;
}
#ifdef LEGACY_RDMA_REG_MR
/* We keep mr_handle in lkey so send and recv can get the mr ptr */
*lkey = *mr_handle;
#else
*lkey = rdma_backend_mr_lkey(&mr->backend_mr);
#endif
}
*rkey = -1;
mr->pd_handle = pd_handle;
return 0;
out_dealloc_mr:
rdma_res_tbl_dealloc(&dev_res->mr_tbl, *mr_handle);
return ret;
}
RdmaRmMR *rdma_rm_get_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle)
{
return rdma_res_tbl_get(&dev_res->mr_tbl, mr_handle);
}
void rdma_rm_dealloc_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle)
{
RdmaRmMR *mr = rdma_rm_get_mr(dev_res, mr_handle);
if (mr) {
rdma_backend_destroy_mr(&mr->backend_mr);
trace_rdma_rm_dealloc_mr(mr_handle, mr->start);
if (mr->start) {
mr->virt -= (mr->start & (TARGET_PAGE_SIZE - 1));
munmap(mr->virt, mr->length);
}
rdma_res_tbl_dealloc(&dev_res->mr_tbl, mr_handle);
}
}
int rdma_rm_alloc_uc(RdmaDeviceResources *dev_res, uint32_t pfn,
uint32_t *uc_handle)
{
RdmaRmUC *uc;
/* TODO: Need to make sure pfn is between the BAR start address and
 * bar start + RDMA_BAR2_UAR_SIZE
if (pfn > RDMA_BAR2_UAR_SIZE) {
rdma_error_report("pfn out of range (%d > %d)", pfn,
RDMA_BAR2_UAR_SIZE);
return -ENOMEM;
}
*/
uc = rdma_res_tbl_alloc(&dev_res->uc_tbl, uc_handle);
if (!uc) {
return -ENOMEM;
}
return 0;
}
RdmaRmUC *rdma_rm_get_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle)
{
return rdma_res_tbl_get(&dev_res->uc_tbl, uc_handle);
}
void rdma_rm_dealloc_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle)
{
RdmaRmUC *uc = rdma_rm_get_uc(dev_res, uc_handle);
if (uc) {
rdma_res_tbl_dealloc(&dev_res->uc_tbl, uc_handle);
}
}
RdmaRmCQ *rdma_rm_get_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle)
{
return rdma_res_tbl_get(&dev_res->cq_tbl, cq_handle);
}
int rdma_rm_alloc_cq(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t cqe, uint32_t *cq_handle, void *opaque)
{
int rc;
RdmaRmCQ *cq;
cq = rdma_res_tbl_alloc(&dev_res->cq_tbl, cq_handle);
if (!cq) {
return -ENOMEM;
}
cq->opaque = opaque;
cq->notify = CNT_CLEAR;
rc = rdma_backend_create_cq(backend_dev, &cq->backend_cq, cqe);
if (rc) {
rc = -EIO;
goto out_dealloc_cq;
}
return 0;
out_dealloc_cq:
rdma_rm_dealloc_cq(dev_res, *cq_handle);
return rc;
}
void rdma_rm_req_notify_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle,
bool notify)
{
RdmaRmCQ *cq;
cq = rdma_rm_get_cq(dev_res, cq_handle);
if (!cq) {
return;
}
if (cq->notify != CNT_SET) {
cq->notify = notify ? CNT_ARM : CNT_CLEAR;
}
}
void rdma_rm_dealloc_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle)
{
RdmaRmCQ *cq;
cq = rdma_rm_get_cq(dev_res, cq_handle);
if (!cq) {
return;
}
rdma_backend_destroy_cq(&cq->backend_cq);
rdma_res_tbl_dealloc(&dev_res->cq_tbl, cq_handle);
}
RdmaRmQP *rdma_rm_get_qp(RdmaDeviceResources *dev_res, uint32_t qpn)
{
GBytes *key = g_bytes_new(&qpn, sizeof(qpn));
RdmaRmQP *qp = g_hash_table_lookup(dev_res->qp_hash, key);
g_bytes_unref(key);
if (!qp) {
rdma_error_report("Invalid QP handle %d", qpn);
}
return qp;
}
int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint8_t qp_type, uint32_t max_send_wr,
uint32_t max_send_sge, uint32_t send_cq_handle,
uint32_t max_recv_wr, uint32_t max_recv_sge,
uint32_t recv_cq_handle, void *opaque, uint32_t *qpn,
uint8_t is_srq, uint32_t srq_handle)
{
int rc;
RdmaRmQP *qp;
RdmaRmCQ *scq, *rcq;
RdmaRmPD *pd;
RdmaRmSRQ *srq = NULL;
uint32_t rm_qpn;
pd = rdma_rm_get_pd(dev_res, pd_handle);
if (!pd) {
return -EINVAL;
}
scq = rdma_rm_get_cq(dev_res, send_cq_handle);
rcq = rdma_rm_get_cq(dev_res, recv_cq_handle);
if (!scq || !rcq) {
rdma_error_report("Invalid send_cqn or recv_cqn (%d, %d)",
send_cq_handle, recv_cq_handle);
return -EINVAL;
}
if (is_srq) {
srq = rdma_rm_get_srq(dev_res, srq_handle);
if (!srq) {
rdma_error_report("Invalid srqn %d", srq_handle);
return -EINVAL;
}
srq->recv_cq_handle = recv_cq_handle;
}
if (qp_type == IBV_QPT_GSI) {
scq->notify = CNT_SET;
rcq->notify = CNT_SET;
}
qp = rdma_res_tbl_alloc(&dev_res->qp_tbl, &rm_qpn);
if (!qp) {
return -ENOMEM;
}
qp->qpn = rm_qpn;
qp->qp_state = IBV_QPS_RESET;
qp->qp_type = qp_type;
qp->send_cq_handle = send_cq_handle;
qp->recv_cq_handle = recv_cq_handle;
qp->opaque = opaque;
qp->is_srq = is_srq;
rc = rdma_backend_create_qp(&qp->backend_qp, qp_type, &pd->backend_pd,
&scq->backend_cq, &rcq->backend_cq,
is_srq ? &srq->backend_srq : NULL,
max_send_wr, max_recv_wr, max_send_sge,
max_recv_sge);
if (rc) {
rc = -EIO;
goto out_dealloc_qp;
}
*qpn = rdma_backend_qpn(&qp->backend_qp);
trace_rdma_rm_alloc_qp(rm_qpn, *qpn, qp_type);
g_hash_table_insert(dev_res->qp_hash, g_bytes_new(qpn, sizeof(*qpn)), qp);
return 0;
out_dealloc_qp:
rdma_res_tbl_dealloc(&dev_res->qp_tbl, qp->qpn);
return rc;
}
int rdma_rm_modify_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t qp_handle, uint32_t attr_mask, uint8_t sgid_idx,
union ibv_gid *dgid, uint32_t dqpn,
enum ibv_qp_state qp_state, uint32_t qkey,
uint32_t rq_psn, uint32_t sq_psn)
{
RdmaRmQP *qp;
int ret;
qp = rdma_rm_get_qp(dev_res, qp_handle);
if (!qp) {
return -EINVAL;
}
if (qp->qp_type == IBV_QPT_SMI) {
rdma_error_report("Got QP0 request");
return -EPERM;
} else if (qp->qp_type == IBV_QPT_GSI) {
return 0;
}
trace_rdma_rm_modify_qp(qp_handle, attr_mask, qp_state, sgid_idx);
if (attr_mask & IBV_QP_STATE) {
qp->qp_state = qp_state;
if (qp->qp_state == IBV_QPS_INIT) {
ret = rdma_backend_qp_state_init(backend_dev, &qp->backend_qp,
qp->qp_type, qkey);
if (ret) {
return -EIO;
}
}
if (qp->qp_state == IBV_QPS_RTR) {
/* Get backend gid index */
sgid_idx = rdma_rm_get_backend_gid_index(dev_res, backend_dev,
sgid_idx);
if (sgid_idx <= 0) { /* TODO check also less than bk.max_sgid */
rdma_error_report("Failed to get bk sgid_idx for sgid_idx %d",
sgid_idx);
return -EIO;
}
ret = rdma_backend_qp_state_rtr(backend_dev, &qp->backend_qp,
qp->qp_type, sgid_idx, dgid, dqpn,
rq_psn, qkey,
attr_mask & IBV_QP_QKEY);
if (ret) {
return -EIO;
}
}
if (qp->qp_state == IBV_QPS_RTS) {
ret = rdma_backend_qp_state_rts(&qp->backend_qp, qp->qp_type,
sq_psn, qkey,
attr_mask & IBV_QP_QKEY);
if (ret) {
return -EIO;
}
}
}
return 0;
}
int rdma_rm_query_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t qp_handle, struct ibv_qp_attr *attr,
int attr_mask, struct ibv_qp_init_attr *init_attr)
{
RdmaRmQP *qp;
qp = rdma_rm_get_qp(dev_res, qp_handle);
if (!qp) {
return -EINVAL;
}
return rdma_backend_query_qp(&qp->backend_qp, attr, attr_mask, init_attr);
}
void rdma_rm_dealloc_qp(RdmaDeviceResources *dev_res, uint32_t qp_handle)
{
RdmaRmQP *qp;
GBytes *key;
key = g_bytes_new(&qp_handle, sizeof(qp_handle));
qp = g_hash_table_lookup(dev_res->qp_hash, key);
g_hash_table_remove(dev_res->qp_hash, key);
g_bytes_unref(key);
if (!qp) {
return;
}
rdma_backend_destroy_qp(&qp->backend_qp, dev_res);
rdma_res_tbl_dealloc(&dev_res->qp_tbl, qp->qpn);
}
RdmaRmSRQ *rdma_rm_get_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle)
{
return rdma_res_tbl_get(&dev_res->srq_tbl, srq_handle);
}
int rdma_rm_alloc_srq(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint32_t max_wr, uint32_t max_sge, uint32_t srq_limit,
uint32_t *srq_handle, void *opaque)
{
RdmaRmSRQ *srq;
RdmaRmPD *pd;
int rc;
pd = rdma_rm_get_pd(dev_res, pd_handle);
if (!pd) {
return -EINVAL;
}
srq = rdma_res_tbl_alloc(&dev_res->srq_tbl, srq_handle);
if (!srq) {
return -ENOMEM;
}
rc = rdma_backend_create_srq(&srq->backend_srq, &pd->backend_pd,
max_wr, max_sge, srq_limit);
if (rc) {
rc = -EIO;
goto out_dealloc_srq;
}
srq->opaque = opaque;
return 0;
out_dealloc_srq:
rdma_res_tbl_dealloc(&dev_res->srq_tbl, *srq_handle);
return rc;
}
int rdma_rm_query_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle,
struct ibv_srq_attr *srq_attr)
{
RdmaRmSRQ *srq;
srq = rdma_rm_get_srq(dev_res, srq_handle);
if (!srq) {
return -EINVAL;
}
return rdma_backend_query_srq(&srq->backend_srq, srq_attr);
}
int rdma_rm_modify_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle,
struct ibv_srq_attr *srq_attr, int srq_attr_mask)
{
RdmaRmSRQ *srq;
srq = rdma_rm_get_srq(dev_res, srq_handle);
if (!srq) {
return -EINVAL;
}
if ((srq_attr_mask & IBV_SRQ_LIMIT) &&
(srq_attr->srq_limit == 0)) {
return -EINVAL;
}
if ((srq_attr_mask & IBV_SRQ_MAX_WR) &&
(srq_attr->max_wr == 0)) {
return -EINVAL;
}
return rdma_backend_modify_srq(&srq->backend_srq, srq_attr,
srq_attr_mask);
}
void rdma_rm_dealloc_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle)
{
RdmaRmSRQ *srq;
srq = rdma_rm_get_srq(dev_res, srq_handle);
if (!srq) {
return;
}
rdma_backend_destroy_srq(&srq->backend_srq, dev_res);
rdma_res_tbl_dealloc(&dev_res->srq_tbl, srq_handle);
}
void *rdma_rm_get_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id)
{
void **cqe_ctx;
cqe_ctx = rdma_res_tbl_get(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
if (!cqe_ctx) {
return NULL;
}
return *cqe_ctx;
}
int rdma_rm_alloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t *cqe_ctx_id,
void *ctx)
{
void **cqe_ctx;
cqe_ctx = rdma_res_tbl_alloc(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
if (!cqe_ctx) {
return -ENOMEM;
}
*cqe_ctx = ctx;
return 0;
}
void rdma_rm_dealloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id)
{
rdma_res_tbl_dealloc(&dev_res->cqe_ctx_tbl, cqe_ctx_id);
}
int rdma_rm_add_gid(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname, union ibv_gid *gid, int gid_idx)
{
int rc;
rc = rdma_backend_add_gid(backend_dev, ifname, gid);
if (rc) {
return -EINVAL;
}
memcpy(&dev_res->port.gid_tbl[gid_idx].gid, gid, sizeof(*gid));
return 0;
}
int rdma_rm_del_gid(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname, int gid_idx)
{
int rc;
if (!dev_res->port.gid_tbl[gid_idx].gid.global.interface_id) {
return 0;
}
rc = rdma_backend_del_gid(backend_dev, ifname,
&dev_res->port.gid_tbl[gid_idx].gid);
if (rc) {
return -EINVAL;
}
memset(dev_res->port.gid_tbl[gid_idx].gid.raw, 0,
sizeof(dev_res->port.gid_tbl[gid_idx].gid));
dev_res->port.gid_tbl[gid_idx].backend_gid_index = -1;
return 0;
}
int rdma_rm_get_backend_gid_index(RdmaDeviceResources *dev_res,
RdmaBackendDev *backend_dev, int sgid_idx)
{
if (unlikely(sgid_idx < 0 || sgid_idx >= MAX_PORT_GIDS)) {
rdma_error_report("Got invalid sgid_idx %d", sgid_idx);
return -EINVAL;
}
if (unlikely(dev_res->port.gid_tbl[sgid_idx].backend_gid_index == -1)) {
dev_res->port.gid_tbl[sgid_idx].backend_gid_index =
rdma_backend_get_gid_index(backend_dev,
&dev_res->port.gid_tbl[sgid_idx].gid);
}
return dev_res->port.gid_tbl[sgid_idx].backend_gid_index;
}
static void destroy_qp_hash_key(gpointer data)
{
g_bytes_unref(data);
}
static void init_ports(RdmaDeviceResources *dev_res)
{
int i;
memset(&dev_res->port, 0, sizeof(dev_res->port));
dev_res->port.state = IBV_PORT_DOWN;
for (i = 0; i < MAX_PORT_GIDS; i++) {
dev_res->port.gid_tbl[i].backend_gid_index = -1;
}
}
static void fini_ports(RdmaDeviceResources *dev_res,
RdmaBackendDev *backend_dev, const char *ifname)
{
int i;
dev_res->port.state = IBV_PORT_DOWN;
for (i = 0; i < MAX_PORT_GIDS; i++) {
rdma_rm_del_gid(dev_res, backend_dev, ifname, i);
}
}
int rdma_rm_init(RdmaDeviceResources *dev_res, struct ibv_device_attr *dev_attr)
{
dev_res->qp_hash = g_hash_table_new_full(g_bytes_hash, g_bytes_equal,
destroy_qp_hash_key, NULL);
if (!dev_res->qp_hash) {
return -ENOMEM;
}
res_tbl_init("PD", &dev_res->pd_tbl, dev_attr->max_pd, sizeof(RdmaRmPD));
res_tbl_init("CQ", &dev_res->cq_tbl, dev_attr->max_cq, sizeof(RdmaRmCQ));
res_tbl_init("MR", &dev_res->mr_tbl, dev_attr->max_mr, sizeof(RdmaRmMR));
res_tbl_init("QP", &dev_res->qp_tbl, dev_attr->max_qp, sizeof(RdmaRmQP));
res_tbl_init("CQE_CTX", &dev_res->cqe_ctx_tbl, dev_attr->max_qp *
dev_attr->max_qp_wr, sizeof(void *));
res_tbl_init("UC", &dev_res->uc_tbl, MAX_UCS, sizeof(RdmaRmUC));
res_tbl_init("SRQ", &dev_res->srq_tbl, dev_attr->max_srq,
sizeof(RdmaRmSRQ));
init_ports(dev_res);
qemu_mutex_init(&dev_res->lock);
memset(&dev_res->stats, 0, sizeof(dev_res->stats));
qatomic_set(&dev_res->stats.missing_cqe, 0);
return 0;
}
void rdma_rm_fini(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname)
{
qemu_mutex_destroy(&dev_res->lock);
fini_ports(dev_res, backend_dev, ifname);
res_tbl_free(&dev_res->srq_tbl);
res_tbl_free(&dev_res->uc_tbl);
res_tbl_free(&dev_res->cqe_ctx_tbl);
res_tbl_free(&dev_res->qp_tbl);
res_tbl_free(&dev_res->mr_tbl);
res_tbl_free(&dev_res->cq_tbl);
res_tbl_free(&dev_res->pd_tbl);
if (dev_res->qp_hash) {
g_hash_table_destroy(dev_res->qp_hash);
}
}

@@ -1,97 +0,0 @@
/*
* RDMA device: Definitions of Resource Manager functions
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_RM_H
#define RDMA_RM_H
#include "qapi/error.h"
#include "rdma_backend_defs.h"
#include "rdma_rm_defs.h"
int rdma_rm_init(RdmaDeviceResources *dev_res,
struct ibv_device_attr *dev_attr);
void rdma_rm_fini(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname);
int rdma_rm_alloc_pd(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t *pd_handle, uint32_t ctx_handle);
RdmaRmPD *rdma_rm_get_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle);
void rdma_rm_dealloc_pd(RdmaDeviceResources *dev_res, uint32_t pd_handle);
int rdma_rm_alloc_mr(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint64_t guest_start, uint64_t guest_length,
void *host_virt, int access_flags, uint32_t *mr_handle,
uint32_t *lkey, uint32_t *rkey);
RdmaRmMR *rdma_rm_get_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle);
void rdma_rm_dealloc_mr(RdmaDeviceResources *dev_res, uint32_t mr_handle);
int rdma_rm_alloc_uc(RdmaDeviceResources *dev_res, uint32_t pfn,
uint32_t *uc_handle);
RdmaRmUC *rdma_rm_get_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle);
void rdma_rm_dealloc_uc(RdmaDeviceResources *dev_res, uint32_t uc_handle);
int rdma_rm_alloc_cq(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t cqe, uint32_t *cq_handle, void *opaque);
RdmaRmCQ *rdma_rm_get_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle);
void rdma_rm_req_notify_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle,
bool notify);
void rdma_rm_dealloc_cq(RdmaDeviceResources *dev_res, uint32_t cq_handle);
int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint8_t qp_type, uint32_t max_send_wr,
uint32_t max_send_sge, uint32_t send_cq_handle,
uint32_t max_recv_wr, uint32_t max_recv_sge,
uint32_t recv_cq_handle, void *opaque, uint32_t *qpn,
uint8_t is_srq, uint32_t srq_handle);
RdmaRmQP *rdma_rm_get_qp(RdmaDeviceResources *dev_res, uint32_t qpn);
int rdma_rm_modify_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t qp_handle, uint32_t attr_mask, uint8_t sgid_idx,
union ibv_gid *dgid, uint32_t dqpn,
enum ibv_qp_state qp_state, uint32_t qkey,
uint32_t rq_psn, uint32_t sq_psn);
int rdma_rm_query_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
uint32_t qp_handle, struct ibv_qp_attr *attr,
int attr_mask, struct ibv_qp_init_attr *init_attr);
void rdma_rm_dealloc_qp(RdmaDeviceResources *dev_res, uint32_t qp_handle);
RdmaRmSRQ *rdma_rm_get_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle);
int rdma_rm_alloc_srq(RdmaDeviceResources *dev_res, uint32_t pd_handle,
uint32_t max_wr, uint32_t max_sge, uint32_t srq_limit,
uint32_t *srq_handle, void *opaque);
int rdma_rm_query_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle,
struct ibv_srq_attr *srq_attr);
int rdma_rm_modify_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle,
struct ibv_srq_attr *srq_attr, int srq_attr_mask);
void rdma_rm_dealloc_srq(RdmaDeviceResources *dev_res, uint32_t srq_handle);
int rdma_rm_alloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t *cqe_ctx_id,
void *ctx);
void *rdma_rm_get_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id);
void rdma_rm_dealloc_cqe_ctx(RdmaDeviceResources *dev_res, uint32_t cqe_ctx_id);
int rdma_rm_add_gid(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname, union ibv_gid *gid, int gid_idx);
int rdma_rm_del_gid(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
const char *ifname, int gid_idx);
int rdma_rm_get_backend_gid_index(RdmaDeviceResources *dev_res,
RdmaBackendDev *backend_dev, int sgid_idx);
static inline union ibv_gid *rdma_rm_get_gid(RdmaDeviceResources *dev_res,
int sgid_idx)
{
return &dev_res->port.gid_tbl[sgid_idx].gid;
}
void rdma_format_device_counters(RdmaDeviceResources *dev_res, GString *buf);
#endif

@@ -1,146 +0,0 @@
/*
* RDMA device: Definitions of Resource Manager structures
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_RM_DEFS_H
#define RDMA_RM_DEFS_H
#include "rdma_backend_defs.h"
#define MAX_PORTS 1 /* Do not change - we support only one port */
#define MAX_PORT_GIDS 255
#define MAX_GIDS MAX_PORT_GIDS
#define MAX_PORT_PKEYS 1
#define MAX_PKEYS MAX_PORT_PKEYS
#define MAX_UCS 512
#define MAX_MR_SIZE (1UL << 27)
#define MAX_QP 1024
#define MAX_SGE 4
#define MAX_CQ 2048
#define MAX_MR 1024
#define MAX_PD 1024
#define MAX_QP_RD_ATOM 16
#define MAX_QP_INIT_RD_ATOM 16
#define MAX_AH 64
#define MAX_SRQ 512
#define MAX_RM_TBL_NAME 16
#define MAX_CONSEQ_EMPTY_POLL_CQ 4096 /* considered as error above this */
typedef struct RdmaRmResTbl {
char name[MAX_RM_TBL_NAME];
QemuMutex lock;
unsigned long *bitmap;
size_t tbl_sz;
size_t res_sz;
void *tbl;
uint32_t used; /* number of used entries in the table */
} RdmaRmResTbl;
typedef struct RdmaRmPD {
RdmaBackendPD backend_pd;
uint32_t ctx_handle;
} RdmaRmPD;
typedef enum CQNotificationType {
CNT_CLEAR,
CNT_ARM,
CNT_SET,
} CQNotificationType;
typedef struct RdmaRmCQ {
RdmaBackendCQ backend_cq;
void *opaque;
CQNotificationType notify;
} RdmaRmCQ;
/* MR (DMA region) */
typedef struct RdmaRmMR {
RdmaBackendMR backend_mr;
void *virt;
uint64_t start;
size_t length;
uint32_t pd_handle;
uint32_t lkey;
uint32_t rkey;
} RdmaRmMR;
typedef struct RdmaRmUC {
uint64_t uc_handle;
} RdmaRmUC;
typedef struct RdmaRmQP {
RdmaBackendQP backend_qp;
void *opaque;
uint32_t qp_type;
uint32_t qpn;
uint32_t send_cq_handle;
uint32_t recv_cq_handle;
enum ibv_qp_state qp_state;
uint8_t is_srq;
} RdmaRmQP;
typedef struct RdmaRmSRQ {
RdmaBackendSRQ backend_srq;
uint32_t recv_cq_handle;
void *opaque;
} RdmaRmSRQ;
typedef struct RdmaRmGid {
union ibv_gid gid;
int backend_gid_index;
} RdmaRmGid;
typedef struct RdmaRmPort {
RdmaRmGid gid_tbl[MAX_PORT_GIDS];
enum ibv_port_state state;
} RdmaRmPort;
typedef struct RdmaRmStats {
uint64_t tx;
uint64_t tx_len;
uint64_t tx_err;
uint64_t rx_bufs;
uint64_t rx_bufs_len;
uint64_t rx_bufs_err;
uint64_t rx_srq;
uint64_t completions;
uint64_t mad_tx;
uint64_t mad_tx_err;
uint64_t mad_rx;
uint64_t mad_rx_err;
uint64_t mad_rx_bufs;
uint64_t mad_rx_bufs_err;
uint64_t poll_cq_from_bk;
uint64_t poll_cq_from_guest;
uint64_t poll_cq_from_guest_empty;
uint64_t poll_cq_ppoll_to;
uint32_t missing_cqe;
} RdmaRmStats;
struct RdmaDeviceResources {
RdmaRmPort port;
RdmaRmResTbl pd_tbl;
RdmaRmResTbl mr_tbl;
RdmaRmResTbl uc_tbl;
RdmaRmResTbl qp_tbl;
RdmaRmResTbl cq_tbl;
RdmaRmResTbl cqe_ctx_tbl;
RdmaRmResTbl srq_tbl;
GHashTable *qp_hash; /* Maps real (backend) QP numbers to emulated QPs */
QemuMutex lock;
RdmaRmStats stats;
};
#endif

@@ -1,126 +0,0 @@
/*
* QEMU paravirtual RDMA - Generic RDMA backend
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "hw/pci/pci_device.h"
#include "trace.h"
#include "rdma_utils.h"
void *rdma_pci_dma_map(PCIDevice *dev, dma_addr_t addr, dma_addr_t len)
{
void *p;
dma_addr_t pci_len = len;
if (!addr) {
rdma_error_report("addr is NULL");
return NULL;
}
p = pci_dma_map(dev, addr, &pci_len, DMA_DIRECTION_TO_DEVICE);
if (!p) {
rdma_error_report("pci_dma_map fail, addr=0x%"PRIx64", len=%"PRId64,
addr, pci_len);
return NULL;
}
if (pci_len != len) {
rdma_pci_dma_unmap(dev, p, pci_len);
return NULL;
}
trace_rdma_pci_dma_map(addr, p, pci_len);
return p;
}
void rdma_pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len)
{
trace_rdma_pci_dma_unmap(buffer);
if (buffer) {
pci_dma_unmap(dev, buffer, len, DMA_DIRECTION_TO_DEVICE, 0);
}
}
void rdma_protected_gqueue_init(RdmaProtectedGQueue *list)
{
qemu_mutex_init(&list->lock);
list->list = g_queue_new();
}
void rdma_protected_gqueue_destroy(RdmaProtectedGQueue *list)
{
if (list->list) {
g_queue_free_full(list->list, g_free);
qemu_mutex_destroy(&list->lock);
list->list = NULL;
}
}
void rdma_protected_gqueue_append_int64(RdmaProtectedGQueue *list,
int64_t value)
{
qemu_mutex_lock(&list->lock);
g_queue_push_tail(list->list, g_memdup(&value, sizeof(value)));
qemu_mutex_unlock(&list->lock);
}
int64_t rdma_protected_gqueue_pop_int64(RdmaProtectedGQueue *list)
{
int64_t *valp;
int64_t val;
qemu_mutex_lock(&list->lock);
valp = g_queue_pop_head(list->list);
qemu_mutex_unlock(&list->lock);
if (!valp) {
return -ENOENT;
}
val = *valp;
g_free(valp);
return val;
}
void rdma_protected_gslist_init(RdmaProtectedGSList *list)
{
qemu_mutex_init(&list->lock);
}
void rdma_protected_gslist_destroy(RdmaProtectedGSList *list)
{
if (list->list) {
g_slist_free(list->list);
qemu_mutex_destroy(&list->lock);
list->list = NULL;
}
}
void rdma_protected_gslist_append_int32(RdmaProtectedGSList *list,
int32_t value)
{
qemu_mutex_lock(&list->lock);
list->list = g_slist_prepend(list->list, GINT_TO_POINTER(value));
qemu_mutex_unlock(&list->lock);
}
void rdma_protected_gslist_remove_int32(RdmaProtectedGSList *list,
int32_t value)
{
qemu_mutex_lock(&list->lock);
list->list = g_slist_remove(list->list, GINT_TO_POINTER(value));
qemu_mutex_unlock(&list->lock);
}
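/*
 * Typical usage of the protected-queue helpers (illustrative sketch):
 *
 *     RdmaProtectedGQueue q;
 *     rdma_protected_gqueue_init(&q);
 *     rdma_protected_gqueue_append_int64(&q, 42);
 *     int64_t v = rdma_protected_gqueue_pop_int64(&q); // 42, or -ENOENT
 *                                                      // when empty
 *     rdma_protected_gqueue_destroy(&q);
 *
 * Pop returning -ENOENT is ambiguous for negative payloads, so callers are
 * expected to store only non-negative values.
 */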

@@ -1,63 +0,0 @@
/*
* RDMA device: Debug utilities
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_UTILS_H
#define RDMA_UTILS_H
#include "qemu/error-report.h"
#include "sysemu/dma.h"
#define rdma_error_report(fmt, ...) \
error_report("%s: " fmt, "rdma", ## __VA_ARGS__)
#define rdma_warn_report(fmt, ...) \
warn_report("%s: " fmt, "rdma", ## __VA_ARGS__)
#define rdma_info_report(fmt, ...) \
info_report("%s: " fmt, "rdma", ## __VA_ARGS__)
typedef struct RdmaProtectedGQueue {
QemuMutex lock;
GQueue *list;
} RdmaProtectedGQueue;
typedef struct RdmaProtectedGSList {
QemuMutex lock;
GSList *list;
} RdmaProtectedGSList;
void *rdma_pci_dma_map(PCIDevice *dev, dma_addr_t addr, dma_addr_t len);
void rdma_pci_dma_unmap(PCIDevice *dev, void *buffer, dma_addr_t len);
void rdma_protected_gqueue_init(RdmaProtectedGQueue *list);
void rdma_protected_gqueue_destroy(RdmaProtectedGQueue *list);
void rdma_protected_gqueue_append_int64(RdmaProtectedGQueue *list,
int64_t value);
int64_t rdma_protected_gqueue_pop_int64(RdmaProtectedGQueue *list);
void rdma_protected_gslist_init(RdmaProtectedGSList *list);
void rdma_protected_gslist_destroy(RdmaProtectedGSList *list);
void rdma_protected_gslist_append_int32(RdmaProtectedGSList *list,
int32_t value);
void rdma_protected_gslist_remove_int32(RdmaProtectedGSList *list,
int32_t value);
static inline void addrconf_addr_eui48(uint8_t *eui, const char *addr)
{
memcpy(eui, addr, 3);
eui[3] = 0xFF;
eui[4] = 0xFE;
memcpy(eui + 5, addr + 3, 3);
eui[0] ^= 2;
}
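/*
 * For illustration, a hypothetical MAC 52:54:00:12:34:56 becomes the
 * modified EUI-64 50:54:00:ff:fe:12:34:56: ff:fe is inserted in the middle
 * and the universal/local bit (bit 1 of the first octet) is flipped, as
 * described in RFC 4291 Appendix A.
 */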
#endif

@@ -1,31 +0,0 @@
# See docs/devel/tracing.rst for syntax documentation.
# rdma_backend.c
rdma_check_dev_attr(const char *name, int max_bk, int max_fe) "%s: be=%d, fe=%d"
rdma_create_ah_cache_hit(uint64_t subnet, uint64_t if_id) "subnet=0x%"PRIx64",if_id=0x%"PRIx64
rdma_create_ah_cache_miss(uint64_t subnet, uint64_t if_id) "subnet=0x%"PRIx64",if_id=0x%"PRIx64
rdma_poll_cq(int ne, void *ibcq) "Got %d completion(s) from cq %p"
rdmacm_mux(const char *title, int msg_type, int op_code) "%s: msg_type=%d, op_code=%d"
rdmacm_mux_check_op_status(int msg_type, int op_code, int err_code) "resp: msg_type=%d, op_code=%d, err_code=%d"
rdma_mad_message(const char *title, int len, char *data) "mad %s (%d): %s"
rdma_backend_rc_qp_state_init(uint32_t qpn) "RC QP 0x%x switch to INIT"
rdma_backend_ud_qp_state_init(uint32_t qpn, uint32_t qkey) "UD QP 0x%x switch to INIT, qkey=0x%x"
rdma_backend_rc_qp_state_rtr(uint32_t qpn, uint64_t subnet, uint64_t ifid, uint8_t sgid_idx, uint32_t dqpn, uint32_t rq_psn) "RC QP 0x%x switch to RTR, subnet = 0x%"PRIx64", ifid = 0x%"PRIx64 ", sgid_idx=%d, dqpn=0x%x, rq_psn=0x%x"
rdma_backend_ud_qp_state_rtr(uint32_t qpn, uint32_t qkey) "UD QP 0x%x switch to RTR, qkey=0x%x"
rdma_backend_rc_qp_state_rts(uint32_t qpn, uint32_t sq_psn) "RC QP 0x%x switch to RTS, sq_psn=0x%x, "
rdma_backend_ud_qp_state_rts(uint32_t qpn, uint32_t sq_psn, uint32_t qkey) "UD QP 0x%x switch to RTS, sq_psn=0x%x, qkey=0x%x"
rdma_backend_get_gid_index(uint64_t subnet, uint64_t ifid, int gid_idx) "subnet=0x%"PRIx64", ifid=0x%"PRIx64 ", gid_idx=%d"
rdma_backend_gid_change(const char *op, uint64_t subnet, uint64_t ifid) "%s subnet=0x%"PRIx64", ifid=0x%"PRIx64
# rdma_rm.c
rdma_res_tbl_get(char *name, uint32_t handle) "tbl %s, handle %d"
rdma_res_tbl_alloc(char *name, uint32_t handle) "tbl %s, handle %d"
rdma_res_tbl_dealloc(char *name, uint32_t handle) "tbl %s, handle %d"
rdma_rm_alloc_mr(uint32_t mr_handle, void *host_virt, uint64_t guest_start, uint64_t guest_length, int access_flags) "mr_handle=%d, host_virt=%p, guest_start=0x%"PRIx64", length=%" PRId64", access_flags=0x%x"
rdma_rm_dealloc_mr(uint32_t mr_handle, uint64_t guest_start) "mr_handle=%d, guest_start=0x%"PRIx64
rdma_rm_alloc_qp(uint32_t rm_qpn, uint32_t backend_qpn, uint8_t qp_type) "rm_qpn=%d, backend_qpn=0x%x, qp_type=%d"
rdma_rm_modify_qp(uint32_t qpn, uint32_t attr_mask, int qp_state, uint8_t sgid_idx) "qpn=0x%x, attr_mask=0x%x, qp_state=%d, sgid_idx=%d"
# rdma_utils.c
rdma_pci_dma_map(uint64_t addr, void *vaddr, uint64_t len) "0x%"PRIx64" -> %p (len=%" PRIu64")"
rdma_pci_dma_unmap(void *vaddr) "%p"
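# Format note (illustrative): each line declares one trace point; the C-style
# prototype supplies the argument types and the printf-style string the
# rendering, e.g. trace_rdma_poll_cq(4, cq) logs "Got 4 completion(s) from
# cq <pointer>".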

@@ -1 +0,0 @@
#include "trace/trace-hw_rdma.h"

@@ -1,144 +0,0 @@
/*
* QEMU VMWARE paravirtual RDMA device definitions
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef PVRDMA_PVRDMA_H
#define PVRDMA_PVRDMA_H
#include "qemu/units.h"
#include "qemu/notify.h"
#include "hw/pci/msix.h"
#include "hw/pci/pci_device.h"
#include "chardev/char-fe.h"
#include "hw/net/vmxnet3_defs.h"
#include "../rdma_backend_defs.h"
#include "../rdma_rm_defs.h"
#include "standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h"
#include "pvrdma_dev_ring.h"
#include "qom/object.h"
/* BARs */
#define RDMA_MSIX_BAR_IDX 0
#define RDMA_REG_BAR_IDX 1
#define RDMA_UAR_BAR_IDX 2
#define RDMA_BAR0_MSIX_SIZE (16 * KiB)
#define RDMA_BAR1_REGS_SIZE 64
#define RDMA_BAR2_UAR_SIZE (0x1000 * MAX_UCS) /* each UC gets a page */
/* MSIX */
#define RDMA_MAX_INTRS 3
#define RDMA_MSIX_TABLE 0x0000
#define RDMA_MSIX_PBA 0x2000
/* Interrupts Vectors */
#define INTR_VEC_CMD_RING 0
#define INTR_VEC_CMD_ASYNC_EVENTS 1
#define INTR_VEC_CMD_COMPLETION_Q 2
/* HW attributes */
#define PVRDMA_HW_NAME "pvrdma"
#define PVRDMA_HW_VERSION 17
#define PVRDMA_FW_VERSION 14
/* Some defaults */
#define PVRDMA_PKEY 0xFFFF
typedef struct DSRInfo {
dma_addr_t dma;
struct pvrdma_device_shared_region *dsr;
union pvrdma_cmd_req *req;
union pvrdma_cmd_resp *rsp;
PvrdmaRingState *async_ring_state;
PvrdmaRing async;
PvrdmaRingState *cq_ring_state;
PvrdmaRing cq;
} DSRInfo;
typedef struct PVRDMADevStats {
uint64_t commands;
uint64_t regs_reads;
uint64_t regs_writes;
uint64_t uar_writes;
uint64_t interrupts;
} PVRDMADevStats;
struct PVRDMADev {
PCIDevice parent_obj;
MemoryRegion msix;
MemoryRegion regs;
uint32_t regs_data[RDMA_BAR1_REGS_SIZE];
MemoryRegion uar;
uint32_t uar_data[RDMA_BAR2_UAR_SIZE];
DSRInfo dsr_info;
int interrupt_mask;
struct ibv_device_attr dev_attr;
uint64_t node_guid;
char *backend_eth_device_name;
char *backend_device_name;
uint8_t backend_port_num;
RdmaBackendDev backend_dev;
RdmaDeviceResources rdma_dev_res;
CharBackend mad_chr;
VMXNET3State *func0;
Notifier shutdown_notifier;
PVRDMADevStats stats;
};
typedef struct PVRDMADev PVRDMADev;
DECLARE_INSTANCE_CHECKER(PVRDMADev, PVRDMA_DEV,
PVRDMA_HW_NAME)
static inline int get_reg_val(PVRDMADev *dev, hwaddr addr, uint32_t *val)
{
int idx = addr >> 2;
if (idx >= RDMA_BAR1_REGS_SIZE) {
return -EINVAL;
}
*val = dev->regs_data[idx];
return 0;
}
static inline int set_reg_val(PVRDMADev *dev, hwaddr addr, uint32_t val)
{
int idx = addr >> 2;
if (idx >= RDMA_BAR1_REGS_SIZE) {
return -EINVAL;
}
dev->regs_data[idx] = val;
return 0;
}
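/*
 * Illustrative note: addr is a byte offset into the register BAR, so
 * "addr >> 2" turns it into an index into the 32-bit regs_data[] array;
 * e.g. an access at offset 0x08 hits regs_data[2].
 */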
static inline void post_interrupt(PVRDMADev *dev, unsigned vector)
{
PCIDevice *pci_dev = PCI_DEVICE(dev);
if (likely(!dev->interrupt_mask)) {
dev->stats.interrupts++;
msix_notify(pci_dev, vector);
}
}
int pvrdma_exec_cmd(PVRDMADev *dev);
#endif

@@ -1,815 +0,0 @@
/*
* QEMU paravirtual RDMA - Command channel
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "cpu.h"
#include "hw/pci/pci.h"
#include "hw/pci/pci_ids.h"
#include "../rdma_backend.h"
#include "../rdma_rm.h"
#include "../rdma_utils.h"
#include "trace.h"
#include "pvrdma.h"
#include "standard-headers/rdma/vmw_pvrdma-abi.h"
static void *pvrdma_map_to_pdir(PCIDevice *pdev, uint64_t pdir_dma,
uint32_t nchunks, size_t length)
{
uint64_t *dir, *tbl;
int tbl_idx, dir_idx, addr_idx;
void *host_virt = NULL, *curr_page;
if (!nchunks) {
rdma_error_report("Got nchunks=0");
return NULL;
}
length = ROUND_UP(length, TARGET_PAGE_SIZE);
if (nchunks * TARGET_PAGE_SIZE != length) {
rdma_error_report("Invalid nchunks/length (%u, %lu)", nchunks,
(unsigned long)length);
return NULL;
}
dir = rdma_pci_dma_map(pdev, pdir_dma, TARGET_PAGE_SIZE);
if (!dir) {
rdma_error_report("Failed to map to page directory");
return NULL;
}
tbl = rdma_pci_dma_map(pdev, dir[0], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to page table 0");
goto out_unmap_dir;
}
curr_page = rdma_pci_dma_map(pdev, (dma_addr_t)tbl[0], TARGET_PAGE_SIZE);
if (!curr_page) {
rdma_error_report("Failed to map the page 0");
goto out_unmap_tbl;
}
host_virt = mremap(curr_page, 0, length, MREMAP_MAYMOVE);
if (host_virt == MAP_FAILED) {
host_virt = NULL;
rdma_error_report("Failed to remap memory for host_virt");
goto out_unmap_tbl;
}
trace_pvrdma_map_to_pdir_host_virt(curr_page, host_virt);
rdma_pci_dma_unmap(pdev, curr_page, TARGET_PAGE_SIZE);
dir_idx = 0;
tbl_idx = 1;
addr_idx = 1;
while (addr_idx < nchunks) {
if (tbl_idx == TARGET_PAGE_SIZE / sizeof(uint64_t)) {
tbl_idx = 0;
dir_idx++;
rdma_pci_dma_unmap(pdev, tbl, TARGET_PAGE_SIZE);
tbl = rdma_pci_dma_map(pdev, dir[dir_idx], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to page table %d", dir_idx);
goto out_unmap_host_virt;
}
}
curr_page = rdma_pci_dma_map(pdev, (dma_addr_t)tbl[tbl_idx],
TARGET_PAGE_SIZE);
if (!curr_page) {
rdma_error_report("Failed to map to page %d, dir %d", tbl_idx,
dir_idx);
goto out_unmap_host_virt;
}
mremap(curr_page, 0, TARGET_PAGE_SIZE, MREMAP_MAYMOVE | MREMAP_FIXED,
host_virt + TARGET_PAGE_SIZE * addr_idx);
trace_pvrdma_map_to_pdir_next_page(addr_idx, curr_page, host_virt +
TARGET_PAGE_SIZE * addr_idx);
rdma_pci_dma_unmap(pdev, curr_page, TARGET_PAGE_SIZE);
addr_idx++;
tbl_idx++;
}
goto out_unmap_tbl;
out_unmap_host_virt:
munmap(host_virt, length);
host_virt = NULL;
out_unmap_tbl:
rdma_pci_dma_unmap(pdev, tbl, TARGET_PAGE_SIZE);
out_unmap_dir:
rdma_pci_dma_unmap(pdev, dir, TARGET_PAGE_SIZE);
return host_virt;
}
static int query_port(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_query_port *cmd = &req->query_port;
struct pvrdma_cmd_query_port_resp *resp = &rsp->query_port_resp;
struct ibv_port_attr attrs = {};
if (cmd->port_num > MAX_PORTS) {
return -EINVAL;
}
if (rdma_backend_query_port(&dev->backend_dev, &attrs)) {
return -ENOMEM;
}
memset(resp, 0, sizeof(*resp));
/*
* The state, max_mtu and active_mtu fields are enums; the values
* for pvrdma_port_state and pvrdma_mtu match those for
* ibv_port_state and ibv_mtu, so we can cast them safely.
*/
resp->attrs.state = dev->func0->device_active ?
(enum pvrdma_port_state)attrs.state : PVRDMA_PORT_DOWN;
resp->attrs.max_mtu = (enum pvrdma_mtu)attrs.max_mtu;
resp->attrs.active_mtu = (enum pvrdma_mtu)attrs.active_mtu;
resp->attrs.phys_state = attrs.phys_state;
resp->attrs.gid_tbl_len = MIN(MAX_PORT_GIDS, attrs.gid_tbl_len);
resp->attrs.max_msg_sz = 1024;
resp->attrs.pkey_tbl_len = MIN(MAX_PORT_PKEYS, attrs.pkey_tbl_len);
resp->attrs.active_width = 1;
resp->attrs.active_speed = 1;
return 0;
}
static int query_pkey(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_query_pkey *cmd = &req->query_pkey;
struct pvrdma_cmd_query_pkey_resp *resp = &rsp->query_pkey_resp;
if (cmd->port_num > MAX_PORTS) {
return -EINVAL;
}
if (cmd->index > MAX_PKEYS) {
return -EINVAL;
}
memset(resp, 0, sizeof(*resp));
resp->pkey = PVRDMA_PKEY;
return 0;
}
static int create_pd(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_pd *cmd = &req->create_pd;
struct pvrdma_cmd_create_pd_resp *resp = &rsp->create_pd_resp;
memset(resp, 0, sizeof(*resp));
return rdma_rm_alloc_pd(&dev->rdma_dev_res, &dev->backend_dev,
&resp->pd_handle, cmd->ctx_handle);
}
static int destroy_pd(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_pd *cmd = &req->destroy_pd;
rdma_rm_dealloc_pd(&dev->rdma_dev_res, cmd->pd_handle);
return 0;
}
static int create_mr(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_mr *cmd = &req->create_mr;
struct pvrdma_cmd_create_mr_resp *resp = &rsp->create_mr_resp;
PCIDevice *pci_dev = PCI_DEVICE(dev);
void *host_virt = NULL;
int rc = 0;
memset(resp, 0, sizeof(*resp));
if (!(cmd->flags & PVRDMA_MR_FLAG_DMA)) {
host_virt = pvrdma_map_to_pdir(pci_dev, cmd->pdir_dma, cmd->nchunks,
cmd->length);
if (!host_virt) {
rdma_error_report("Failed to map to pdir");
return -EINVAL;
}
}
rc = rdma_rm_alloc_mr(&dev->rdma_dev_res, cmd->pd_handle, cmd->start,
cmd->length, host_virt, cmd->access_flags,
&resp->mr_handle, &resp->lkey, &resp->rkey);
if (rc && host_virt) {
munmap(host_virt, cmd->length);
}
return rc;
}
static int destroy_mr(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_mr *cmd = &req->destroy_mr;
rdma_rm_dealloc_mr(&dev->rdma_dev_res, cmd->mr_handle);
return 0;
}
static int create_cq_ring(PCIDevice *pci_dev, PvrdmaRing **ring,
uint64_t pdir_dma, uint32_t nchunks, uint32_t cqe)
{
uint64_t *dir = NULL, *tbl = NULL;
PvrdmaRing *r;
int rc = -EINVAL;
char ring_name[MAX_RING_NAME_SZ];
if (!nchunks || nchunks > PVRDMA_MAX_FAST_REG_PAGES) {
rdma_error_report("Got invalid nchunks: %d", nchunks);
return rc;
}
dir = rdma_pci_dma_map(pci_dev, pdir_dma, TARGET_PAGE_SIZE);
if (!dir) {
rdma_error_report("Failed to map to CQ page directory");
goto out;
}
tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to CQ page table");
goto out;
}
r = g_malloc(sizeof(*r));
*ring = r;
r->ring_state = rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE);
if (!r->ring_state) {
rdma_error_report("Failed to map to CQ ring state");
goto out_free_ring;
}
sprintf(ring_name, "cq_ring_%" PRIx64, pdir_dma);
rc = pvrdma_ring_init(r, ring_name, pci_dev, &r->ring_state[1],
cqe, sizeof(struct pvrdma_cqe),
/* first page is ring state */
(dma_addr_t *)&tbl[1], nchunks - 1);
if (rc) {
goto out_unmap_ring_state;
}
goto out;
out_unmap_ring_state:
/* ring_state was in slot 1, not 0 so need to jump back */
rdma_pci_dma_unmap(pci_dev, --r->ring_state, TARGET_PAGE_SIZE);
out_free_ring:
g_free(r);
out:
rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE);
rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE);
return rc;
}
static void destroy_cq_ring(PvrdmaRing *ring)
{
pvrdma_ring_free(ring);
/* ring_state was in slot 1, not 0 so need to jump back */
rdma_pci_dma_unmap(ring->dev, --ring->ring_state, TARGET_PAGE_SIZE);
g_free(ring);
}
static int create_cq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_cq *cmd = &req->create_cq;
struct pvrdma_cmd_create_cq_resp *resp = &rsp->create_cq_resp;
PvrdmaRing *ring = NULL;
int rc;
memset(resp, 0, sizeof(*resp));
resp->cqe = cmd->cqe;
rc = create_cq_ring(PCI_DEVICE(dev), &ring, cmd->pdir_dma, cmd->nchunks,
cmd->cqe);
if (rc) {
return rc;
}
rc = rdma_rm_alloc_cq(&dev->rdma_dev_res, &dev->backend_dev, cmd->cqe,
&resp->cq_handle, ring);
if (rc) {
destroy_cq_ring(ring);
}
resp->cqe = cmd->cqe;
return rc;
}
static int destroy_cq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_cq *cmd = &req->destroy_cq;
RdmaRmCQ *cq;
PvrdmaRing *ring;
cq = rdma_rm_get_cq(&dev->rdma_dev_res, cmd->cq_handle);
if (!cq) {
rdma_error_report("Got invalid CQ handle");
return -EINVAL;
}
ring = (PvrdmaRing *)cq->opaque;
destroy_cq_ring(ring);
rdma_rm_dealloc_cq(&dev->rdma_dev_res, cmd->cq_handle);
return 0;
}
static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
PvrdmaRing **rings, uint32_t scqe, uint32_t smax_sge,
uint32_t spages, uint32_t rcqe, uint32_t rmax_sge,
uint32_t rpages, uint8_t is_srq)
{
uint64_t *dir = NULL, *tbl = NULL;
PvrdmaRing *sr, *rr;
int rc = -EINVAL;
char ring_name[MAX_RING_NAME_SZ];
uint32_t wqe_sz;
if (!spages || spages > PVRDMA_MAX_FAST_REG_PAGES) {
rdma_error_report("Got invalid send page count for QP ring: %d",
spages);
return rc;
}
if (!is_srq && (!rpages || rpages > PVRDMA_MAX_FAST_REG_PAGES)) {
rdma_error_report("Got invalid recv page count for QP ring: %d",
rpages);
return rc;
}
dir = rdma_pci_dma_map(pci_dev, pdir_dma, TARGET_PAGE_SIZE);
if (!dir) {
rdma_error_report("Failed to map to QP page directory");
goto out;
}
tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to QP page table");
goto out;
}
if (!is_srq) {
sr = g_malloc(2 * sizeof(*rr));
rr = &sr[1];
} else {
sr = g_malloc(sizeof(*sr));
}
*rings = sr;
/* Create send ring */
sr->ring_state = rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE);
if (!sr->ring_state) {
rdma_error_report("Failed to map to QP ring state");
goto out_free_sr_mem;
}
wqe_sz = pow2ceil(sizeof(struct pvrdma_sq_wqe_hdr) +
sizeof(struct pvrdma_sge) * smax_sge - 1);
sprintf(ring_name, "qp_sring_%" PRIx64, pdir_dma);
rc = pvrdma_ring_init(sr, ring_name, pci_dev, sr->ring_state,
scqe, wqe_sz, (dma_addr_t *)&tbl[1], spages);
if (rc) {
goto out_unmap_ring_state;
}
if (!is_srq) {
/* Create recv ring */
rr->ring_state = &sr->ring_state[1];
wqe_sz = pow2ceil(sizeof(struct pvrdma_rq_wqe_hdr) +
sizeof(struct pvrdma_sge) * rmax_sge - 1);
sprintf(ring_name, "qp_rring_%" PRIx64, pdir_dma);
rc = pvrdma_ring_init(rr, ring_name, pci_dev, rr->ring_state,
rcqe, wqe_sz, (dma_addr_t *)&tbl[1 + spages],
rpages);
if (rc) {
goto out_free_sr;
}
}
goto out;
out_free_sr:
pvrdma_ring_free(sr);
out_unmap_ring_state:
rdma_pci_dma_unmap(pci_dev, sr->ring_state, TARGET_PAGE_SIZE);
out_free_sr_mem:
g_free(sr);
out:
rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE);
rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE);
return rc;
}
static void destroy_qp_rings(PvrdmaRing *ring, uint8_t is_srq)
{
pvrdma_ring_free(&ring[0]);
if (!is_srq) {
pvrdma_ring_free(&ring[1]);
}
rdma_pci_dma_unmap(ring->dev, ring->ring_state, TARGET_PAGE_SIZE);
g_free(ring);
}
static int create_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_qp *cmd = &req->create_qp;
struct pvrdma_cmd_create_qp_resp *resp = &rsp->create_qp_resp;
PvrdmaRing *rings = NULL;
int rc;
memset(resp, 0, sizeof(*resp));
rc = create_qp_rings(PCI_DEVICE(dev), cmd->pdir_dma, &rings,
cmd->max_send_wr, cmd->max_send_sge, cmd->send_chunks,
cmd->max_recv_wr, cmd->max_recv_sge,
cmd->total_chunks - cmd->send_chunks - 1, cmd->is_srq);
if (rc) {
return rc;
}
rc = rdma_rm_alloc_qp(&dev->rdma_dev_res, cmd->pd_handle, cmd->qp_type,
cmd->max_send_wr, cmd->max_send_sge,
cmd->send_cq_handle, cmd->max_recv_wr,
cmd->max_recv_sge, cmd->recv_cq_handle, rings,
&resp->qpn, cmd->is_srq, cmd->srq_handle);
if (rc) {
destroy_qp_rings(rings, cmd->is_srq);
return rc;
}
resp->max_send_wr = cmd->max_send_wr;
resp->max_recv_wr = cmd->max_recv_wr;
resp->max_send_sge = cmd->max_send_sge;
resp->max_recv_sge = cmd->max_recv_sge;
resp->max_inline_data = cmd->max_inline_data;
return 0;
}
static int modify_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_modify_qp *cmd = &req->modify_qp;
/* No need to verify sgid_index since it is u8 */
return rdma_rm_modify_qp(&dev->rdma_dev_res, &dev->backend_dev,
cmd->qp_handle, cmd->attr_mask,
cmd->attrs.ah_attr.grh.sgid_index,
(union ibv_gid *)&cmd->attrs.ah_attr.grh.dgid,
cmd->attrs.dest_qp_num,
(enum ibv_qp_state)cmd->attrs.qp_state,
cmd->attrs.qkey, cmd->attrs.rq_psn,
cmd->attrs.sq_psn);
}
static int query_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_query_qp *cmd = &req->query_qp;
struct pvrdma_cmd_query_qp_resp *resp = &rsp->query_qp_resp;
struct ibv_qp_init_attr init_attr;
memset(resp, 0, sizeof(*resp));
return rdma_rm_query_qp(&dev->rdma_dev_res, &dev->backend_dev,
cmd->qp_handle,
(struct ibv_qp_attr *)&resp->attrs,
cmd->attr_mask,
&init_attr);
}
static int destroy_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_qp *cmd = &req->destroy_qp;
RdmaRmQP *qp;
PvrdmaRing *ring;
qp = rdma_rm_get_qp(&dev->rdma_dev_res, cmd->qp_handle);
if (!qp) {
return -EINVAL;
}
ring = (PvrdmaRing *)qp->opaque;
destroy_qp_rings(ring, qp->is_srq);
rdma_rm_dealloc_qp(&dev->rdma_dev_res, cmd->qp_handle);
return 0;
}
static int create_bind(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_bind *cmd = &req->create_bind;
union ibv_gid *gid = (union ibv_gid *)&cmd->new_gid;
if (cmd->index >= MAX_PORT_GIDS) {
return -EINVAL;
}
return rdma_rm_add_gid(&dev->rdma_dev_res, &dev->backend_dev,
dev->backend_eth_device_name, gid, cmd->index);
}
static int destroy_bind(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_bind *cmd = &req->destroy_bind;
if (cmd->index >= MAX_PORT_GIDS) {
return -EINVAL;
}
return rdma_rm_del_gid(&dev->rdma_dev_res, &dev->backend_dev,
dev->backend_eth_device_name, cmd->index);
}
static int create_uc(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_uc *cmd = &req->create_uc;
struct pvrdma_cmd_create_uc_resp *resp = &rsp->create_uc_resp;
memset(resp, 0, sizeof(*resp));
return rdma_rm_alloc_uc(&dev->rdma_dev_res, cmd->pfn, &resp->ctx_handle);
}
static int destroy_uc(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_uc *cmd = &req->destroy_uc;
rdma_rm_dealloc_uc(&dev->rdma_dev_res, cmd->ctx_handle);
return 0;
}
static int create_srq_ring(PCIDevice *pci_dev, PvrdmaRing **ring,
uint64_t pdir_dma, uint32_t max_wr,
uint32_t max_sge, uint32_t nchunks)
{
uint64_t *dir = NULL, *tbl = NULL;
PvrdmaRing *r;
int rc = -EINVAL;
char ring_name[MAX_RING_NAME_SZ];
uint32_t wqe_sz;
if (!nchunks || nchunks > PVRDMA_MAX_FAST_REG_PAGES) {
rdma_error_report("Got invalid page count for SRQ ring: %d",
nchunks);
return rc;
}
dir = rdma_pci_dma_map(pci_dev, pdir_dma, TARGET_PAGE_SIZE);
if (!dir) {
rdma_error_report("Failed to map to SRQ page directory");
goto out;
}
tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to SRQ page table");
goto out;
}
r = g_malloc(sizeof(*r));
*ring = r;
r->ring_state = rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE);
if (!r->ring_state) {
rdma_error_report("Failed to map tp SRQ ring state");
goto out_free_ring_mem;
}
wqe_sz = pow2ceil(sizeof(struct pvrdma_rq_wqe_hdr) +
sizeof(struct pvrdma_sge) * max_sge - 1);
sprintf(ring_name, "srq_ring_%" PRIx64, pdir_dma);
rc = pvrdma_ring_init(r, ring_name, pci_dev, &r->ring_state[1], max_wr,
wqe_sz, (dma_addr_t *)&tbl[1], nchunks - 1);
if (rc) {
goto out_unmap_ring_state;
}
goto out;
out_unmap_ring_state:
rdma_pci_dma_unmap(pci_dev, r->ring_state, TARGET_PAGE_SIZE);
out_free_ring_mem:
g_free(r);
out:
rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE);
rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE);
return rc;
}
static void destroy_srq_ring(PvrdmaRing *ring)
{
pvrdma_ring_free(ring);
rdma_pci_dma_unmap(ring->dev, ring->ring_state, TARGET_PAGE_SIZE);
g_free(ring);
}
static int create_srq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_create_srq *cmd = &req->create_srq;
struct pvrdma_cmd_create_srq_resp *resp = &rsp->create_srq_resp;
PvrdmaRing *ring = NULL;
int rc;
memset(resp, 0, sizeof(*resp));
rc = create_srq_ring(PCI_DEVICE(dev), &ring, cmd->pdir_dma,
cmd->attrs.max_wr, cmd->attrs.max_sge,
cmd->nchunks);
if (rc) {
return rc;
}
rc = rdma_rm_alloc_srq(&dev->rdma_dev_res, cmd->pd_handle,
cmd->attrs.max_wr, cmd->attrs.max_sge,
cmd->attrs.srq_limit, &resp->srqn, ring);
if (rc) {
destroy_srq_ring(ring);
return rc;
}
return 0;
}
static int query_srq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_query_srq *cmd = &req->query_srq;
struct pvrdma_cmd_query_srq_resp *resp = &rsp->query_srq_resp;
memset(resp, 0, sizeof(*resp));
return rdma_rm_query_srq(&dev->rdma_dev_res, cmd->srq_handle,
(struct ibv_srq_attr *)&resp->attrs);
}
static int modify_srq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_modify_srq *cmd = &req->modify_srq;
/* Only support SRQ limit */
if (!(cmd->attr_mask & IBV_SRQ_LIMIT) ||
(cmd->attr_mask & IBV_SRQ_MAX_WR))
return -EINVAL;
return rdma_rm_modify_srq(&dev->rdma_dev_res, cmd->srq_handle,
(struct ibv_srq_attr *)&cmd->attrs,
cmd->attr_mask);
}
static int destroy_srq(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp)
{
struct pvrdma_cmd_destroy_srq *cmd = &req->destroy_srq;
RdmaRmSRQ *srq;
PvrdmaRing *ring;
srq = rdma_rm_get_srq(&dev->rdma_dev_res, cmd->srq_handle);
if (!srq) {
return -EINVAL;
}
ring = (PvrdmaRing *)srq->opaque;
destroy_srq_ring(ring);
rdma_rm_dealloc_srq(&dev->rdma_dev_res, cmd->srq_handle);
return 0;
}
struct cmd_handler {
uint32_t cmd;
uint32_t ack;
int (*exec)(PVRDMADev *dev, union pvrdma_cmd_req *req,
union pvrdma_cmd_resp *rsp);
};
static struct cmd_handler cmd_handlers[] = {
{PVRDMA_CMD_QUERY_PORT, PVRDMA_CMD_QUERY_PORT_RESP, query_port},
{PVRDMA_CMD_QUERY_PKEY, PVRDMA_CMD_QUERY_PKEY_RESP, query_pkey},
{PVRDMA_CMD_CREATE_PD, PVRDMA_CMD_CREATE_PD_RESP, create_pd},
{PVRDMA_CMD_DESTROY_PD, PVRDMA_CMD_DESTROY_PD_RESP_NOOP, destroy_pd},
{PVRDMA_CMD_CREATE_MR, PVRDMA_CMD_CREATE_MR_RESP, create_mr},
{PVRDMA_CMD_DESTROY_MR, PVRDMA_CMD_DESTROY_MR_RESP_NOOP, destroy_mr},
{PVRDMA_CMD_CREATE_CQ, PVRDMA_CMD_CREATE_CQ_RESP, create_cq},
{PVRDMA_CMD_RESIZE_CQ, PVRDMA_CMD_RESIZE_CQ_RESP, NULL},
{PVRDMA_CMD_DESTROY_CQ, PVRDMA_CMD_DESTROY_CQ_RESP_NOOP, destroy_cq},
{PVRDMA_CMD_CREATE_QP, PVRDMA_CMD_CREATE_QP_RESP, create_qp},
{PVRDMA_CMD_MODIFY_QP, PVRDMA_CMD_MODIFY_QP_RESP, modify_qp},
{PVRDMA_CMD_QUERY_QP, PVRDMA_CMD_QUERY_QP_RESP, query_qp},
{PVRDMA_CMD_DESTROY_QP, PVRDMA_CMD_DESTROY_QP_RESP, destroy_qp},
{PVRDMA_CMD_CREATE_UC, PVRDMA_CMD_CREATE_UC_RESP, create_uc},
{PVRDMA_CMD_DESTROY_UC, PVRDMA_CMD_DESTROY_UC_RESP_NOOP, destroy_uc},
{PVRDMA_CMD_CREATE_BIND, PVRDMA_CMD_CREATE_BIND_RESP_NOOP, create_bind},
{PVRDMA_CMD_DESTROY_BIND, PVRDMA_CMD_DESTROY_BIND_RESP_NOOP, destroy_bind},
{PVRDMA_CMD_CREATE_SRQ, PVRDMA_CMD_CREATE_SRQ_RESP, create_srq},
{PVRDMA_CMD_QUERY_SRQ, PVRDMA_CMD_QUERY_SRQ_RESP, query_srq},
{PVRDMA_CMD_MODIFY_SRQ, PVRDMA_CMD_MODIFY_SRQ_RESP, modify_srq},
{PVRDMA_CMD_DESTROY_SRQ, PVRDMA_CMD_DESTROY_SRQ_RESP, destroy_srq},
};
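/*
 * Illustrative note: pvrdma_exec_cmd() below indexes this table directly
 * with the guest-supplied hdr.cmd value, so the entries must stay in the
 * same order as the PVRDMA_CMD_* request codes.
 */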
int pvrdma_exec_cmd(PVRDMADev *dev)
{
int err = 0xFFFF;
DSRInfo *dsr_info;
dsr_info = &dev->dsr_info;
if (!dsr_info->dsr) {
/* Buggy or malicious guest driver */
rdma_error_report("Exec command without dsr, req or rsp buffers");
goto out;
}
if (dsr_info->req->hdr.cmd >= sizeof(cmd_handlers) /
sizeof(struct cmd_handler)) {
rdma_error_report("Unsupported command");
goto out;
}
if (!cmd_handlers[dsr_info->req->hdr.cmd].exec) {
rdma_error_report("Unsupported command (not implemented yet)");
goto out;
}
err = cmd_handlers[dsr_info->req->hdr.cmd].exec(dev, dsr_info->req,
dsr_info->rsp);
dsr_info->rsp->hdr.response = dsr_info->req->hdr.response;
dsr_info->rsp->hdr.ack = cmd_handlers[dsr_info->req->hdr.cmd].ack;
dsr_info->rsp->hdr.err = err < 0 ? -err : 0;
trace_pvrdma_exec_cmd(dsr_info->req->hdr.cmd, dsr_info->rsp->hdr.err);
dev->stats.commands++;
out:
set_reg_val(dev, PVRDMA_REG_ERR, err);
post_interrupt(dev, INTR_VEC_CMD_RING);
return (err == 0) ? 0 : -EINVAL;
}

@@ -1,141 +0,0 @@
/*
* QEMU paravirtual RDMA - Device rings
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "hw/pci/pci.h"
#include "cpu.h"
#include "qemu/cutils.h"
#include "trace.h"
#include "../rdma_utils.h"
#include "pvrdma_dev_ring.h"
int pvrdma_ring_init(PvrdmaRing *ring, const char *name, PCIDevice *dev,
PvrdmaRingState *ring_state, uint32_t max_elems,
size_t elem_sz, dma_addr_t *tbl, uint32_t npages)
{
int i;
int rc = 0;
pstrcpy(ring->name, MAX_RING_NAME_SZ, name);
ring->dev = dev;
ring->ring_state = ring_state;
ring->max_elems = max_elems;
ring->elem_sz = elem_sz;
/* TODO: decide whether we want to reset the driver-visible ring state here:
qatomic_set(&ring->ring_state->prod_tail, 0);
qatomic_set(&ring->ring_state->cons_head, 0);
*/
ring->npages = npages;
ring->pages = g_new0(void *, npages);
for (i = 0; i < npages; i++) {
if (!tbl[i]) {
rdma_error_report("npages=%d but tbl[%d] is NULL", npages, i);
continue;
}
ring->pages[i] = rdma_pci_dma_map(dev, tbl[i], TARGET_PAGE_SIZE);
if (!ring->pages[i]) {
rc = -ENOMEM;
rdma_error_report("Failed to map to page %d in ring %s", i, name);
goto out_free;
}
memset(ring->pages[i], 0, TARGET_PAGE_SIZE);
}
goto out;
out_free:
while (i--) {
rdma_pci_dma_unmap(dev, ring->pages[i], TARGET_PAGE_SIZE);
}
g_free(ring->pages);
out:
return rc;
}
void *pvrdma_ring_next_elem_read(PvrdmaRing *ring)
{
unsigned int idx, offset;
const uint32_t tail = qatomic_read(&ring->ring_state->prod_tail);
const uint32_t head = qatomic_read(&ring->ring_state->cons_head);
if (tail & ~((ring->max_elems << 1) - 1) ||
head & ~((ring->max_elems << 1) - 1) ||
tail == head) {
trace_pvrdma_ring_next_elem_read_no_data(ring->name);
return NULL;
}
idx = head & (ring->max_elems - 1);
offset = idx * ring->elem_sz;
return ring->pages[offset / TARGET_PAGE_SIZE] + (offset % TARGET_PAGE_SIZE);
}
void pvrdma_ring_read_inc(PvrdmaRing *ring)
{
uint32_t idx = qatomic_read(&ring->ring_state->cons_head);
idx = (idx + 1) & ((ring->max_elems << 1) - 1);
qatomic_set(&ring->ring_state->cons_head, idx);
}
void *pvrdma_ring_next_elem_write(PvrdmaRing *ring)
{
unsigned int idx, offset;
const uint32_t tail = qatomic_read(&ring->ring_state->prod_tail);
const uint32_t head = qatomic_read(&ring->ring_state->cons_head);
if (tail & ~((ring->max_elems << 1) - 1) ||
head & ~((ring->max_elems << 1) - 1) ||
tail == (head ^ ring->max_elems)) {
rdma_error_report("CQ is full");
return NULL;
}
idx = tail & (ring->max_elems - 1);
offset = idx * ring->elem_sz;
return ring->pages[offset / TARGET_PAGE_SIZE] + (offset % TARGET_PAGE_SIZE);
}
void pvrdma_ring_write_inc(PvrdmaRing *ring)
{
uint32_t idx = qatomic_read(&ring->ring_state->prod_tail);
idx = (idx + 1) & ((ring->max_elems << 1) - 1);
qatomic_set(&ring->ring_state->prod_tail, idx);
}
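/*
 * Worked example (illustrative): the head and tail indices run over the
 * doubled range [0, 2 * max_elems), which makes "empty" and "full"
 * distinguishable without sacrificing a slot.  With max_elems = 4,
 * head == tail means the ring is empty, while tail == (head ^ 4)
 * (e.g. head = 1, tail = 5) means the producer is exactly one lap ahead,
 * i.e. the ring is full.
 */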
void pvrdma_ring_free(PvrdmaRing *ring)
{
if (!ring) {
return;
}
if (!ring->pages) {
return;
}
while (ring->npages--) {
rdma_pci_dma_unmap(ring->dev, ring->pages[ring->npages],
TARGET_PAGE_SIZE);
}
g_free(ring->pages);
ring->pages = NULL;
}

@@ -1,46 +0,0 @@
/*
* QEMU VMWARE paravirtual RDMA ring utilities
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef PVRDMA_DEV_RING_H
#define PVRDMA_DEV_RING_H
#define MAX_RING_NAME_SZ 32
typedef struct PvrdmaRingState {
int prod_tail; /* producer tail */
int cons_head; /* consumer head */
} PvrdmaRingState;
typedef struct PvrdmaRing {
char name[MAX_RING_NAME_SZ];
PCIDevice *dev;
uint32_t max_elems;
size_t elem_sz;
PvrdmaRingState *ring_state; /* used only for unmap */
int npages;
void **pages;
} PvrdmaRing;
int pvrdma_ring_init(PvrdmaRing *ring, const char *name, PCIDevice *dev,
PvrdmaRingState *ring_state, uint32_t max_elems,
size_t elem_sz, dma_addr_t *tbl, uint32_t npages);
void *pvrdma_ring_next_elem_read(PvrdmaRing *ring);
void pvrdma_ring_read_inc(PvrdmaRing *ring);
void *pvrdma_ring_next_elem_write(PvrdmaRing *ring);
void pvrdma_ring_write_inc(PvrdmaRing *ring);
void pvrdma_ring_free(PvrdmaRing *ring);
#endif

@@ -1,735 +0,0 @@
/*
* QEMU paravirtual RDMA
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/module.h"
#include "hw/pci/pci.h"
#include "hw/pci/pci_ids.h"
#include "hw/pci/msi.h"
#include "hw/pci/msix.h"
#include "hw/qdev-properties.h"
#include "hw/qdev-properties-system.h"
#include "cpu.h"
#include "trace.h"
#include "monitor/monitor.h"
#include "hw/rdma/rdma.h"
#include "../rdma_rm.h"
#include "../rdma_backend.h"
#include "../rdma_utils.h"
#include <infiniband/verbs.h>
#include "pvrdma.h"
#include "standard-headers/rdma/vmw_pvrdma-abi.h"
#include "sysemu/runstate.h"
#include "standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_dev_api.h"
#include "pvrdma_qp_ops.h"
static Property pvrdma_dev_properties[] = {
DEFINE_PROP_STRING("netdev", PVRDMADev, backend_eth_device_name),
DEFINE_PROP_STRING("ibdev", PVRDMADev, backend_device_name),
DEFINE_PROP_UINT8("ibport", PVRDMADev, backend_port_num, 1),
DEFINE_PROP_UINT64("dev-caps-max-mr-size", PVRDMADev, dev_attr.max_mr_size,
MAX_MR_SIZE),
DEFINE_PROP_INT32("dev-caps-max-qp", PVRDMADev, dev_attr.max_qp, MAX_QP),
DEFINE_PROP_INT32("dev-caps-max-cq", PVRDMADev, dev_attr.max_cq, MAX_CQ),
DEFINE_PROP_INT32("dev-caps-max-mr", PVRDMADev, dev_attr.max_mr, MAX_MR),
DEFINE_PROP_INT32("dev-caps-max-pd", PVRDMADev, dev_attr.max_pd, MAX_PD),
DEFINE_PROP_INT32("dev-caps-qp-rd-atom", PVRDMADev, dev_attr.max_qp_rd_atom,
MAX_QP_RD_ATOM),
DEFINE_PROP_INT32("dev-caps-max-qp-init-rd-atom", PVRDMADev,
dev_attr.max_qp_init_rd_atom, MAX_QP_INIT_RD_ATOM),
DEFINE_PROP_INT32("dev-caps-max-ah", PVRDMADev, dev_attr.max_ah, MAX_AH),
DEFINE_PROP_INT32("dev-caps-max-srq", PVRDMADev, dev_attr.max_srq, MAX_SRQ),
DEFINE_PROP_CHR("mad-chardev", PVRDMADev, mad_chr),
DEFINE_PROP_END_OF_LIST(),
};
static void pvrdma_format_statistics(RdmaProvider *obj, GString *buf)
{
PVRDMADev *dev = PVRDMA_DEV(obj);
PCIDevice *pdev = PCI_DEVICE(dev);
g_string_append_printf(buf, "%s, %x.%x\n",
pdev->name, PCI_SLOT(pdev->devfn),
PCI_FUNC(pdev->devfn));
g_string_append_printf(buf, "\tcommands : %" PRId64 "\n",
dev->stats.commands);
g_string_append_printf(buf, "\tregs_reads : %" PRId64 "\n",
dev->stats.regs_reads);
g_string_append_printf(buf, "\tregs_writes : %" PRId64 "\n",
dev->stats.regs_writes);
g_string_append_printf(buf, "\tuar_writes : %" PRId64 "\n",
dev->stats.uar_writes);
g_string_append_printf(buf, "\tinterrupts : %" PRId64 "\n",
dev->stats.interrupts);
rdma_format_device_counters(&dev->rdma_dev_res, buf);
}
static void free_dev_ring(PCIDevice *pci_dev, PvrdmaRing *ring,
void *ring_state)
{
pvrdma_ring_free(ring);
rdma_pci_dma_unmap(pci_dev, ring_state, TARGET_PAGE_SIZE);
}
static int init_dev_ring(PvrdmaRing *ring, PvrdmaRingState **ring_state,
const char *name, PCIDevice *pci_dev,
dma_addr_t dir_addr, uint32_t num_pages)
{
uint64_t *dir, *tbl;
int max_pages, rc = 0;
if (!num_pages) {
rdma_error_report("Ring pages count must be strictly positive");
return -EINVAL;
}
/*
* Make sure we can satisfy the requested number of pages in a single
* TARGET_PAGE_SIZE sized page table (taking into account that first entry
* is reserved for ring-state)
*/
max_pages = TARGET_PAGE_SIZE / sizeof(dma_addr_t) - 1;
if (num_pages > max_pages) {
rdma_error_report("Maximum pages on a single directory must not exceed %d\n",
max_pages);
return -EINVAL;
}
dir = rdma_pci_dma_map(pci_dev, dir_addr, TARGET_PAGE_SIZE);
if (!dir) {
rdma_error_report("Failed to map to page directory (ring %s)", name);
rc = -ENOMEM;
goto out;
}
/* We support only one page table for a ring */
tbl = rdma_pci_dma_map(pci_dev, dir[0], TARGET_PAGE_SIZE);
if (!tbl) {
rdma_error_report("Failed to map to page table (ring %s)", name);
rc = -ENOMEM;
goto out_free_dir;
}
*ring_state = rdma_pci_dma_map(pci_dev, tbl[0], TARGET_PAGE_SIZE);
if (!*ring_state) {
rdma_error_report("Failed to map to ring state (ring %s)", name);
rc = -ENOMEM;
goto out_free_tbl;
}
/* RX ring is the second */
(*ring_state)++;
rc = pvrdma_ring_init(ring, name, pci_dev,
(PvrdmaRingState *)*ring_state,
(num_pages - 1) * TARGET_PAGE_SIZE /
sizeof(struct pvrdma_cqne),
sizeof(struct pvrdma_cqne),
(dma_addr_t *)&tbl[1], (dma_addr_t)num_pages - 1);
if (rc) {
rc = -ENOMEM;
goto out_free_ring_state;
}
goto out_free_tbl;
out_free_ring_state:
rdma_pci_dma_unmap(pci_dev, *ring_state, TARGET_PAGE_SIZE);
out_free_tbl:
rdma_pci_dma_unmap(pci_dev, tbl, TARGET_PAGE_SIZE);
out_free_dir:
rdma_pci_dma_unmap(pci_dev, dir, TARGET_PAGE_SIZE);
out:
return rc;
}
static void free_dsr(PVRDMADev *dev)
{
PCIDevice *pci_dev = PCI_DEVICE(dev);
if (!dev->dsr_info.dsr) {
return;
}
free_dev_ring(pci_dev, &dev->dsr_info.async,
dev->dsr_info.async_ring_state);
free_dev_ring(pci_dev, &dev->dsr_info.cq, dev->dsr_info.cq_ring_state);
rdma_pci_dma_unmap(pci_dev, dev->dsr_info.req,
sizeof(union pvrdma_cmd_req));
rdma_pci_dma_unmap(pci_dev, dev->dsr_info.rsp,
sizeof(union pvrdma_cmd_resp));
rdma_pci_dma_unmap(pci_dev, dev->dsr_info.dsr,
sizeof(struct pvrdma_device_shared_region));
dev->dsr_info.dsr = NULL;
}
static int load_dsr(PVRDMADev *dev)
{
int rc = 0;
PCIDevice *pci_dev = PCI_DEVICE(dev);
DSRInfo *dsr_info;
struct pvrdma_device_shared_region *dsr;
free_dsr(dev);
/* Map to DSR */
dev->dsr_info.dsr = rdma_pci_dma_map(pci_dev, dev->dsr_info.dma,
sizeof(struct pvrdma_device_shared_region));
if (!dev->dsr_info.dsr) {
rdma_error_report("Failed to map to DSR");
rc = -ENOMEM;
goto out;
}
/* Shortcuts */
dsr_info = &dev->dsr_info;
dsr = dsr_info->dsr;
/* Map to command slot */
dsr_info->req = rdma_pci_dma_map(pci_dev, dsr->cmd_slot_dma,
sizeof(union pvrdma_cmd_req));
if (!dsr_info->req) {
rdma_error_report("Failed to map to command slot address");
rc = -ENOMEM;
goto out_free_dsr;
}
/* Map to response slot */
dsr_info->rsp = rdma_pci_dma_map(pci_dev, dsr->resp_slot_dma,
sizeof(union pvrdma_cmd_resp));
if (!dsr_info->rsp) {
rdma_error_report("Failed to map to response slot address");
rc = -ENOMEM;
goto out_free_req;
}
/* Map to CQ notification ring */
rc = init_dev_ring(&dsr_info->cq, &dsr_info->cq_ring_state, "dev_cq",
pci_dev, dsr->cq_ring_pages.pdir_dma,
dsr->cq_ring_pages.num_pages);
if (rc) {
rc = -ENOMEM;
goto out_free_rsp;
}
/* Map to event notification ring */
rc = init_dev_ring(&dsr_info->async, &dsr_info->async_ring_state,
"dev_async", pci_dev, dsr->async_ring_pages.pdir_dma,
dsr->async_ring_pages.num_pages);
if (rc) {
rc = -ENOMEM;
goto out_free_rsp;
}
goto out;
out_free_rsp:
rdma_pci_dma_unmap(pci_dev, dsr_info->rsp, sizeof(union pvrdma_cmd_resp));
out_free_req:
rdma_pci_dma_unmap(pci_dev, dsr_info->req, sizeof(union pvrdma_cmd_req));
out_free_dsr:
rdma_pci_dma_unmap(pci_dev, dsr_info->dsr,
sizeof(struct pvrdma_device_shared_region));
dsr_info->dsr = NULL;
out:
return rc;
}
static void init_dsr_dev_caps(PVRDMADev *dev)
{
struct pvrdma_device_shared_region *dsr;
if (!dev->dsr_info.dsr) {
/* Buggy or malicious guest driver */
rdma_error_report("Can't initialized DSR");
return;
}
dsr = dev->dsr_info.dsr;
dsr->caps.fw_ver = PVRDMA_FW_VERSION;
dsr->caps.mode = PVRDMA_DEVICE_MODE_ROCE;
dsr->caps.gid_types |= PVRDMA_GID_TYPE_FLAG_ROCE_V1;
dsr->caps.max_uar = RDMA_BAR2_UAR_SIZE;
dsr->caps.max_mr_size = dev->dev_attr.max_mr_size;
dsr->caps.max_qp = dev->dev_attr.max_qp;
dsr->caps.max_qp_wr = dev->dev_attr.max_qp_wr;
dsr->caps.max_sge = dev->dev_attr.max_sge;
dsr->caps.max_cq = dev->dev_attr.max_cq;
dsr->caps.max_cqe = dev->dev_attr.max_cqe;
dsr->caps.max_mr = dev->dev_attr.max_mr;
dsr->caps.max_pd = dev->dev_attr.max_pd;
dsr->caps.max_ah = dev->dev_attr.max_ah;
dsr->caps.max_srq = dev->dev_attr.max_srq;
dsr->caps.max_srq_wr = dev->dev_attr.max_srq_wr;
dsr->caps.max_srq_sge = dev->dev_attr.max_srq_sge;
dsr->caps.gid_tbl_len = MAX_GIDS;
dsr->caps.sys_image_guid = 0;
dsr->caps.node_guid = dev->node_guid;
dsr->caps.phys_port_cnt = MAX_PORTS;
dsr->caps.max_pkeys = MAX_PKEYS;
}
static void uninit_msix(PCIDevice *pdev, int used_vectors)
{
PVRDMADev *dev = PVRDMA_DEV(pdev);
int i;
for (i = 0; i < used_vectors; i++) {
msix_vector_unuse(pdev, i);
}
msix_uninit(pdev, &dev->msix, &dev->msix);
}
static int init_msix(PCIDevice *pdev)
{
PVRDMADev *dev = PVRDMA_DEV(pdev);
int i;
int rc;
rc = msix_init(pdev, RDMA_MAX_INTRS, &dev->msix, RDMA_MSIX_BAR_IDX,
RDMA_MSIX_TABLE, &dev->msix, RDMA_MSIX_BAR_IDX,
RDMA_MSIX_PBA, 0, NULL);
if (rc < 0) {
rdma_error_report("Failed to initialize MSI-X");
return rc;
}
for (i = 0; i < RDMA_MAX_INTRS; i++) {
msix_vector_use(PCI_DEVICE(dev), i);
}
return 0;
}
static void pvrdma_fini(PCIDevice *pdev)
{
PVRDMADev *dev = PVRDMA_DEV(pdev);
notifier_remove(&dev->shutdown_notifier);
pvrdma_qp_ops_fini();
rdma_backend_stop(&dev->backend_dev);
rdma_rm_fini(&dev->rdma_dev_res, &dev->backend_dev,
dev->backend_eth_device_name);
rdma_backend_fini(&dev->backend_dev);
free_dsr(dev);
if (msix_enabled(pdev)) {
uninit_msix(pdev, RDMA_MAX_INTRS);
}
rdma_info_report("Device %s %x.%x is down", pdev->name,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
}
static void pvrdma_stop(PVRDMADev *dev)
{
rdma_backend_stop(&dev->backend_dev);
}
static void pvrdma_start(PVRDMADev *dev)
{
rdma_backend_start(&dev->backend_dev);
}
static void activate_device(PVRDMADev *dev)
{
pvrdma_start(dev);
set_reg_val(dev, PVRDMA_REG_ERR, 0);
}
static int unquiesce_device(PVRDMADev *dev)
{
return 0;
}
static void reset_device(PVRDMADev *dev)
{
pvrdma_stop(dev);
}
static uint64_t pvrdma_regs_read(void *opaque, hwaddr addr, unsigned size)
{
PVRDMADev *dev = opaque;
uint32_t val;
dev->stats.regs_reads++;
if (get_reg_val(dev, addr, &val)) {
rdma_error_report("Failed to read REG value from address 0x%x",
(uint32_t)addr);
return -EINVAL;
}
trace_pvrdma_regs_read(addr, val);
return val;
}
static void pvrdma_regs_write(void *opaque, hwaddr addr, uint64_t val,
unsigned size)
{
PVRDMADev *dev = opaque;
dev->stats.regs_writes++;
if (set_reg_val(dev, addr, val)) {
rdma_error_report("Failed to set REG value, addr=0x%"PRIx64 ", val=0x%"PRIx64,
addr, val);
return;
}
switch (addr) {
case PVRDMA_REG_DSRLOW:
trace_pvrdma_regs_write(addr, val, "DSRLOW", "");
dev->dsr_info.dma = val;
break;
case PVRDMA_REG_DSRHIGH:
trace_pvrdma_regs_write(addr, val, "DSRHIGH", "");
dev->dsr_info.dma |= val << 32;
load_dsr(dev);
init_dsr_dev_caps(dev);
break;
case PVRDMA_REG_CTL:
switch (val) {
case PVRDMA_DEVICE_CTL_ACTIVATE:
trace_pvrdma_regs_write(addr, val, "CTL", "ACTIVATE");
activate_device(dev);
break;
case PVRDMA_DEVICE_CTL_UNQUIESCE:
trace_pvrdma_regs_write(addr, val, "CTL", "UNQUIESCE");
unquiesce_device(dev);
break;
case PVRDMA_DEVICE_CTL_RESET:
trace_pvrdma_regs_write(addr, val, "CTL", "URESET");
reset_device(dev);
break;
}
break;
case PVRDMA_REG_IMR:
trace_pvrdma_regs_write(addr, val, "INTR_MASK", "");
dev->interrupt_mask = val;
break;
case PVRDMA_REG_REQUEST:
if (val == 0) {
trace_pvrdma_regs_write(addr, val, "REQUEST", "");
pvrdma_exec_cmd(dev);
}
break;
default:
break;
}
}
static const MemoryRegionOps regs_ops = {
.read = pvrdma_regs_read,
.write = pvrdma_regs_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.impl = {
.min_access_size = sizeof(uint32_t),
.max_access_size = sizeof(uint32_t),
},
};
static uint64_t pvrdma_uar_read(void *opaque, hwaddr addr, unsigned size)
{
return 0xffffffff;
}
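/*
 * Illustrative note: the UAR pages are write-only doorbells, so reads have
 * no defined meaning and the model simply returns all-ones.
 */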
static void pvrdma_uar_write(void *opaque, hwaddr addr, uint64_t val,
unsigned size)
{
PVRDMADev *dev = opaque;
dev->stats.uar_writes++;
switch (addr & 0xFFF) { /* Mask with 0xFFF as each UC gets page */
case PVRDMA_UAR_QP_OFFSET:
if (val & PVRDMA_UAR_QP_SEND) {
trace_pvrdma_uar_write(addr, val, "QP", "SEND",
val & PVRDMA_UAR_HANDLE_MASK, 0);
pvrdma_qp_send(dev, val & PVRDMA_UAR_HANDLE_MASK);
}
if (val & PVRDMA_UAR_QP_RECV) {
trace_pvrdma_uar_write(addr, val, "QP", "RECV",
val & PVRDMA_UAR_HANDLE_MASK, 0);
pvrdma_qp_recv(dev, val & PVRDMA_UAR_HANDLE_MASK);
}
break;
case PVRDMA_UAR_CQ_OFFSET:
if (val & PVRDMA_UAR_CQ_ARM) {
trace_pvrdma_uar_write(addr, val, "CQ", "ARM",
val & PVRDMA_UAR_HANDLE_MASK,
!!(val & PVRDMA_UAR_CQ_ARM_SOL));
rdma_rm_req_notify_cq(&dev->rdma_dev_res,
val & PVRDMA_UAR_HANDLE_MASK,
!!(val & PVRDMA_UAR_CQ_ARM_SOL));
}
if (val & PVRDMA_UAR_CQ_ARM_SOL) {
trace_pvrdma_uar_write(addr, val, "CQ", "ARMSOL - not supported", 0,
0);
}
if (val & PVRDMA_UAR_CQ_POLL) {
trace_pvrdma_uar_write(addr, val, "CQ", "POLL",
val & PVRDMA_UAR_HANDLE_MASK, 0);
pvrdma_cq_poll(&dev->rdma_dev_res, val & PVRDMA_UAR_HANDLE_MASK);
}
break;
case PVRDMA_UAR_SRQ_OFFSET:
if (val & PVRDMA_UAR_SRQ_RECV) {
trace_pvrdma_uar_write(addr, val, "QP", "SRQ",
val & PVRDMA_UAR_HANDLE_MASK, 0);
pvrdma_srq_recv(dev, val & PVRDMA_UAR_HANDLE_MASK);
}
break;
default:
rdma_error_report("Unsupported command, addr=0x%"PRIx64", val=0x%"PRIx64,
addr, val);
break;
}
}
static const MemoryRegionOps uar_ops = {
.read = pvrdma_uar_read,
.write = pvrdma_uar_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.impl = {
.min_access_size = sizeof(uint32_t),
.max_access_size = sizeof(uint32_t),
},
};
static void init_pci_config(PCIDevice *pdev)
{
pdev->config[PCI_INTERRUPT_PIN] = 1;
}
static void init_bars(PCIDevice *pdev)
{
PVRDMADev *dev = PVRDMA_DEV(pdev);
/* BAR 0 - MSI-X */
memory_region_init(&dev->msix, OBJECT(dev), "pvrdma-msix",
RDMA_BAR0_MSIX_SIZE);
pci_register_bar(pdev, RDMA_MSIX_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY,
&dev->msix);
/* BAR 1 - Registers */
memset(&dev->regs_data, 0, sizeof(dev->regs_data));
memory_region_init_io(&dev->regs, OBJECT(dev), &regs_ops, dev,
"pvrdma-regs", sizeof(dev->regs_data));
pci_register_bar(pdev, RDMA_REG_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY,
&dev->regs);
/* BAR 2 - UAR */
memset(&dev->uar_data, 0, sizeof(dev->uar_data));
memory_region_init_io(&dev->uar, OBJECT(dev), &uar_ops, dev, "rdma-uar",
sizeof(dev->uar_data));
pci_register_bar(pdev, RDMA_UAR_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY,
&dev->uar);
}
static void init_regs(PCIDevice *pdev)
{
PVRDMADev *dev = PVRDMA_DEV(pdev);
set_reg_val(dev, PVRDMA_REG_VERSION, PVRDMA_HW_VERSION);
set_reg_val(dev, PVRDMA_REG_ERR, 0xFFFF);
}
static void init_dev_caps(PVRDMADev *dev)
{
size_t pg_tbl_bytes = TARGET_PAGE_SIZE *
(TARGET_PAGE_SIZE / sizeof(uint64_t));
size_t wr_sz = MAX(sizeof(struct pvrdma_sq_wqe_hdr),
sizeof(struct pvrdma_rq_wqe_hdr));
dev->dev_attr.max_qp_wr = pg_tbl_bytes /
(wr_sz + sizeof(struct pvrdma_sge) *
dev->dev_attr.max_sge) - TARGET_PAGE_SIZE;
/* First page is ring state ^^^^ */
dev->dev_attr.max_cqe = pg_tbl_bytes / sizeof(struct pvrdma_cqe) -
TARGET_PAGE_SIZE; /* First page is ring state */
dev->dev_attr.max_srq_wr = pg_tbl_bytes /
((sizeof(struct pvrdma_rq_wqe_hdr) +
sizeof(struct pvrdma_sge)) *
dev->dev_attr.max_sge) - TARGET_PAGE_SIZE;
}
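/*
 * Arithmetic note (illustrative): pg_tbl_bytes is the capacity reachable
 * through a single page table: with 4 KiB target pages that is
 * 4096 / 8 = 512 entries, i.e. 512 * 4 KiB = 2 MiB of ring space, from
 * which the WQE, CQE and SRQ limits above are derived.
 */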
static int pvrdma_check_ram_shared(Object *obj, void *opaque)
{
bool *shared = opaque;
if (object_dynamic_cast(obj, "memory-backend-ram")) {
*shared = object_property_get_bool(obj, "share", NULL);
}
return 0;
}
static void pvrdma_shutdown_notifier(Notifier *n, void *opaque)
{
PVRDMADev *dev = container_of(n, PVRDMADev, shutdown_notifier);
PCIDevice *pci_dev = PCI_DEVICE(dev);
pvrdma_fini(pci_dev);
}
static void pvrdma_realize(PCIDevice *pdev, Error **errp)
{
int rc = 0;
PVRDMADev *dev = PVRDMA_DEV(pdev);
Object *memdev_root;
bool ram_shared = false;
PCIDevice *func0;
warn_report_once("pvrdma is deprecated and will be removed in a future release");
rdma_info_report("Initializing device %s %x.%x", pdev->name,
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
if (TARGET_PAGE_SIZE != qemu_real_host_page_size()) {
error_setg(errp, "Target page size must be the same as host page size");
return;
}
func0 = pci_get_function_0(pdev);
/* Break if not vmxnet3 device in slot 0 */
if (strcmp(object_get_typename(OBJECT(func0)), TYPE_VMXNET3)) {
error_setg(errp, "Device on %x.0 must be %s", PCI_SLOT(pdev->devfn),
TYPE_VMXNET3);
return;
}
dev->func0 = VMXNET3(func0);
addrconf_addr_eui48((unsigned char *)&dev->node_guid,
(const char *)&dev->func0->conf.macaddr.a);
memdev_root = object_resolve_path("/objects", NULL);
if (memdev_root) {
object_child_foreach(memdev_root, pvrdma_check_ram_shared, &ram_shared);
}
if (!ram_shared) {
error_setg(errp, "Only shared memory backed ram is supported");
return;
}
dev->dsr_info.dsr = NULL;
init_pci_config(pdev);
init_bars(pdev);
init_regs(pdev);
rc = init_msix(pdev);
if (rc) {
goto out;
}
rc = rdma_backend_init(&dev->backend_dev, pdev, &dev->rdma_dev_res,
dev->backend_device_name, dev->backend_port_num,
&dev->dev_attr, &dev->mad_chr);
if (rc) {
goto out;
}
init_dev_caps(dev);
rc = rdma_rm_init(&dev->rdma_dev_res, &dev->dev_attr);
if (rc) {
goto out;
}
rc = pvrdma_qp_ops_init();
if (rc) {
goto out;
}
memset(&dev->stats, 0, sizeof(dev->stats));
dev->shutdown_notifier.notify = pvrdma_shutdown_notifier;
qemu_register_shutdown_notifier(&dev->shutdown_notifier);
#ifdef LEGACY_RDMA_REG_MR
rdma_info_report("Using legacy reg_mr");
#else
rdma_info_report("Using iova reg_mr");
#endif
out:
if (rc) {
pvrdma_fini(pdev);
error_append_hint(errp, "Device failed to load\n");
}
}
static void pvrdma_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
RdmaProviderClass *ir = RDMA_PROVIDER_CLASS(klass);
k->realize = pvrdma_realize;
k->vendor_id = PCI_VENDOR_ID_VMWARE;
k->device_id = PCI_DEVICE_ID_VMWARE_PVRDMA;
k->revision = 0x00;
k->class_id = PCI_CLASS_NETWORK_OTHER;
dc->desc = "RDMA Device";
device_class_set_props(dc, pvrdma_dev_properties);
set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
ir->format_statistics = pvrdma_format_statistics;
}
static const TypeInfo pvrdma_info = {
.name = PVRDMA_HW_NAME,
.parent = TYPE_PCI_DEVICE,
.instance_size = sizeof(PVRDMADev),
.class_init = pvrdma_class_init,
.interfaces = (InterfaceInfo[]) {
{ INTERFACE_CONVENTIONAL_PCI_DEVICE },
{ INTERFACE_RDMA_PROVIDER },
{ }
}
};
static void register_types(void)
{
type_register_static(&pvrdma_info);
}
type_init(register_types)

@@ -1,298 +0,0 @@
/*
* QEMU paravirtual RDMA - QP implementation
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include "qemu/osdep.h"
#include "../rdma_utils.h"
#include "../rdma_rm.h"
#include "../rdma_backend.h"
#include "trace.h"
#include "pvrdma.h"
#include "standard-headers/rdma/vmw_pvrdma-abi.h"
#include "pvrdma_qp_ops.h"
typedef struct CompHandlerCtx {
PVRDMADev *dev;
uint32_t cq_handle;
struct pvrdma_cqe cqe;
} CompHandlerCtx;
/* Send Queue WQE */
typedef struct PvrdmaSqWqe {
struct pvrdma_sq_wqe_hdr hdr;
struct pvrdma_sge sge[];
} PvrdmaSqWqe;
/* Recv Queue WQE */
typedef struct PvrdmaRqWqe {
struct pvrdma_rq_wqe_hdr hdr;
struct pvrdma_sge sge[];
} PvrdmaRqWqe;
/*
* 1. Put CQE on send CQ ring
* 2. Put CQ number on dsr completion ring
* 3. Interrupt host
*/
static int pvrdma_post_cqe(PVRDMADev *dev, uint32_t cq_handle,
struct pvrdma_cqe *cqe, struct ibv_wc *wc)
{
struct pvrdma_cqe *cqe1;
struct pvrdma_cqne *cqne;
PvrdmaRing *ring;
RdmaRmCQ *cq = rdma_rm_get_cq(&dev->rdma_dev_res, cq_handle);
if (unlikely(!cq)) {
return -EINVAL;
}
ring = (PvrdmaRing *)cq->opaque;
/* Step #1: Put CQE on CQ ring */
cqe1 = pvrdma_ring_next_elem_write(ring);
if (unlikely(!cqe1)) {
return -EINVAL;
}
memset(cqe1, 0, sizeof(*cqe1));
cqe1->wr_id = cqe->wr_id;
cqe1->qp = cqe->qp ? cqe->qp : wc->qp_num;
cqe1->opcode = cqe->opcode;
cqe1->status = wc->status;
cqe1->byte_len = wc->byte_len;
cqe1->src_qp = wc->src_qp;
cqe1->wc_flags = wc->wc_flags;
cqe1->vendor_err = wc->vendor_err;
trace_pvrdma_post_cqe(cq_handle, cq->notify, cqe1->wr_id, cqe1->qp,
cqe1->opcode, cqe1->status, cqe1->byte_len,
cqe1->src_qp, cqe1->wc_flags, cqe1->vendor_err);
pvrdma_ring_write_inc(ring);
/* Step #2: Put CQ number on dsr completion ring */
cqne = pvrdma_ring_next_elem_write(&dev->dsr_info.cq);
if (unlikely(!cqne)) {
return -EINVAL;
}
cqne->info = cq_handle;
pvrdma_ring_write_inc(&dev->dsr_info.cq);
if (cq->notify != CNT_CLEAR) {
if (cq->notify == CNT_ARM) {
cq->notify = CNT_CLEAR;
}
post_interrupt(dev, INTR_VEC_CMD_COMPLETION_Q);
}
return 0;
}
static void pvrdma_qp_ops_comp_handler(void *ctx, struct ibv_wc *wc)
{
CompHandlerCtx *comp_ctx = (CompHandlerCtx *)ctx;
pvrdma_post_cqe(comp_ctx->dev, comp_ctx->cq_handle, &comp_ctx->cqe, wc);
g_free(ctx);
}
static void complete_with_error(uint32_t vendor_err, void *ctx)
{
struct ibv_wc wc = {};
wc.status = IBV_WC_GENERAL_ERR;
wc.vendor_err = vendor_err;
pvrdma_qp_ops_comp_handler(ctx, &wc);
}
void pvrdma_qp_ops_fini(void)
{
rdma_backend_unregister_comp_handler();
}
int pvrdma_qp_ops_init(void)
{
rdma_backend_register_comp_handler(pvrdma_qp_ops_comp_handler);
return 0;
}
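/*
 * Illustrative note: the backend invokes pvrdma_qp_ops_comp_handler() once
 * per completed work request, handing back the CompHandlerCtx that was
 * allocated when the WQE was posted; the handler converts it into a
 * guest-visible CQE and frees the context.
 */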
void pvrdma_qp_send(PVRDMADev *dev, uint32_t qp_handle)
{
RdmaRmQP *qp;
PvrdmaSqWqe *wqe;
PvrdmaRing *ring;
int sgid_idx;
union ibv_gid *sgid;
qp = rdma_rm_get_qp(&dev->rdma_dev_res, qp_handle);
if (unlikely(!qp)) {
return;
}
ring = (PvrdmaRing *)qp->opaque;
wqe = pvrdma_ring_next_elem_read(ring);
while (wqe) {
CompHandlerCtx *comp_ctx;
/* Prepare CQE */
comp_ctx = g_new(CompHandlerCtx, 1);
comp_ctx->dev = dev;
comp_ctx->cq_handle = qp->send_cq_handle;
comp_ctx->cqe.wr_id = wqe->hdr.wr_id;
comp_ctx->cqe.qp = qp_handle;
comp_ctx->cqe.opcode = IBV_WC_SEND;
sgid = rdma_rm_get_gid(&dev->rdma_dev_res, wqe->hdr.wr.ud.av.gid_index);
if (!sgid) {
rdma_error_report("Failed to get gid for idx %d",
wqe->hdr.wr.ud.av.gid_index);
complete_with_error(VENDOR_ERR_INV_GID_IDX, comp_ctx);
continue;
}
sgid_idx = rdma_rm_get_backend_gid_index(&dev->rdma_dev_res,
&dev->backend_dev,
wqe->hdr.wr.ud.av.gid_index);
if (sgid_idx <= 0) {
rdma_error_report("Failed to get bk sgid_idx for sgid_idx %d",
wqe->hdr.wr.ud.av.gid_index);
complete_with_error(VENDOR_ERR_INV_GID_IDX, comp_ctx);
continue;
}
if (wqe->hdr.num_sge > dev->dev_attr.max_sge) {
rdma_error_report("Invalid num_sge=%d (max %d)", wqe->hdr.num_sge,
dev->dev_attr.max_sge);
complete_with_error(VENDOR_ERR_INV_NUM_SGE, comp_ctx);
continue;
}
rdma_backend_post_send(&dev->backend_dev, &qp->backend_qp, qp->qp_type,
(struct ibv_sge *)&wqe->sge[0], wqe->hdr.num_sge,
sgid_idx, sgid,
(union ibv_gid *)wqe->hdr.wr.ud.av.dgid,
wqe->hdr.wr.ud.remote_qpn,
wqe->hdr.wr.ud.remote_qkey, comp_ctx);
pvrdma_ring_read_inc(ring);
wqe = pvrdma_ring_next_elem_read(ring);
}
}
void pvrdma_qp_recv(PVRDMADev *dev, uint32_t qp_handle)
{
RdmaRmQP *qp;
PvrdmaRqWqe *wqe;
PvrdmaRing *ring;
qp = rdma_rm_get_qp(&dev->rdma_dev_res, qp_handle);
if (unlikely(!qp)) {
return;
}
ring = &((PvrdmaRing *)qp->opaque)[1];
wqe = pvrdma_ring_next_elem_read(ring);
while (wqe) {
CompHandlerCtx *comp_ctx;
/* Prepare CQE */
comp_ctx = g_new(CompHandlerCtx, 1);
comp_ctx->dev = dev;
comp_ctx->cq_handle = qp->recv_cq_handle;
comp_ctx->cqe.wr_id = wqe->hdr.wr_id;
comp_ctx->cqe.qp = qp_handle;
comp_ctx->cqe.opcode = IBV_WC_RECV;
if (wqe->hdr.num_sge > dev->dev_attr.max_sge) {
rdma_error_report("Invalid num_sge=%d (max %d)", wqe->hdr.num_sge,
dev->dev_attr.max_sge);
complete_with_error(VENDOR_ERR_INV_NUM_SGE, comp_ctx);
continue;
}
rdma_backend_post_recv(&dev->backend_dev, &qp->backend_qp, qp->qp_type,
(struct ibv_sge *)&wqe->sge[0], wqe->hdr.num_sge,
comp_ctx);
pvrdma_ring_read_inc(ring);
wqe = pvrdma_ring_next_elem_read(ring);
}
}
void pvrdma_srq_recv(PVRDMADev *dev, uint32_t srq_handle)
{
RdmaRmSRQ *srq;
PvrdmaRqWqe *wqe;
PvrdmaRing *ring;
srq = rdma_rm_get_srq(&dev->rdma_dev_res, srq_handle);
if (unlikely(!srq)) {
return;
}
ring = (PvrdmaRing *)srq->opaque;
wqe = pvrdma_ring_next_elem_read(ring);
while (wqe) {
CompHandlerCtx *comp_ctx;
/* Prepare CQE */
comp_ctx = g_new(CompHandlerCtx, 1);
comp_ctx->dev = dev;
comp_ctx->cq_handle = srq->recv_cq_handle;
comp_ctx->cqe.wr_id = wqe->hdr.wr_id;
comp_ctx->cqe.qp = 0;
comp_ctx->cqe.opcode = IBV_WC_RECV;
if (wqe->hdr.num_sge > dev->dev_attr.max_sge) {
rdma_error_report("Invalid num_sge=%d (max %d)", wqe->hdr.num_sge,
dev->dev_attr.max_sge);
complete_with_error(VENDOR_ERR_INV_NUM_SGE, comp_ctx);
continue;
}
rdma_backend_post_srq_recv(&dev->backend_dev, &srq->backend_srq,
(struct ibv_sge *)&wqe->sge[0],
wqe->hdr.num_sge,
comp_ctx);
pvrdma_ring_read_inc(ring);
wqe = pvrdma_ring_next_elem_read(ring);
}
}
void pvrdma_cq_poll(RdmaDeviceResources *dev_res, uint32_t cq_handle)
{
RdmaRmCQ *cq;
cq = rdma_rm_get_cq(dev_res, cq_handle);
if (!cq) {
return;
}
rdma_backend_poll_cq(dev_res, &cq->backend_cq);
}

@@ -1,28 +0,0 @@
/*
* QEMU VMWARE paravirtual RDMA QP Operations
*
* Copyright (C) 2018 Oracle
* Copyright (C) 2018 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
* Marcel Apfelbaum <marcel@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef PVRDMA_QP_OPS_H
#define PVRDMA_QP_OPS_H
#include "pvrdma.h"
int pvrdma_qp_ops_init(void);
void pvrdma_qp_ops_fini(void);
void pvrdma_qp_send(PVRDMADev *dev, uint32_t qp_handle);
void pvrdma_qp_recv(PVRDMADev *dev, uint32_t qp_handle);
void pvrdma_srq_recv(PVRDMADev *dev, uint32_t srq_handle);
void pvrdma_cq_poll(RdmaDeviceResources *dev_res, uint32_t cq_handle);
#endif

@@ -1,17 +0,0 @@
# See docs/devel/tracing.rst for syntax documentation.
# pvrdma_main.c
pvrdma_regs_read(uint64_t addr, uint64_t val) "pvrdma.regs[0x%"PRIx64"]=0x%"PRIx64
pvrdma_regs_write(uint64_t addr, uint64_t val, const char *reg_name, const char *val_name) "pvrdma.regs[0x%"PRIx64"]=0x%"PRIx64" (%s %s)"
pvrdma_uar_write(uint64_t addr, uint64_t val, const char *reg_name, const char *val_name, int val1, int val2) "uar[0x%"PRIx64"]=0x%"PRIx64" (cls=%s, op=%s, obj=%d, val=%d)"
# pvrdma_cmd.c
pvrdma_map_to_pdir_host_virt(void *vfirst, void *vremaped) "mremap %p -> %p"
pvrdma_map_to_pdir_next_page(int page_idx, void *vnext, void *vremaped) "mremap [%d] %p -> %p"
pvrdma_exec_cmd(int cmd, int err) "cmd=%d, err=%d"
# pvrdma_dev_ring.c
pvrdma_ring_next_elem_read_no_data(char *ring_name) "pvrdma_ring %s is empty"
# pvrdma_qp_ops.c
pvrdma_post_cqe(uint32_t cq_handle, int notify, uint64_t wr_id, uint64_t qpn, uint32_t op_code, uint32_t status, uint32_t byte_len, uint32_t src_qp, uint32_t wc_flags, uint32_t vendor_err) "cq_handle=%d, notify=%d, wr_id=0x%"PRIx64", qpn=0x%"PRIx64", opcode=%d, status=%d, byte_len=%d, src_qp=%d, wc_flags=%d, vendor_err=%d"

@@ -1 +0,0 @@
#include "trace/trace-hw_rdma_vmw.h"

@@ -17,10 +17,6 @@ config I8254
bool
depends on ISA_BUS
config ALTERA_TIMER
bool
select PTIMER
config ALLWINNER_A10_PIT
bool
select PTIMER

@@ -1,244 +0,0 @@
/*
* QEMU model of the Altera timer.
*
* Copyright (c) 2012 Chris Wulff <crwulff@gmail.com>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see
* <http://www.gnu.org/licenses/lgpl-2.1.html>
*/
#include "qemu/osdep.h"
#include "qemu/module.h"
#include "qapi/error.h"
#include "hw/sysbus.h"
#include "hw/irq.h"
#include "hw/ptimer.h"
#include "hw/qdev-properties.h"
#include "qom/object.h"
#define R_STATUS 0
#define R_CONTROL 1
#define R_PERIODL 2
#define R_PERIODH 3
#define R_SNAPL 4
#define R_SNAPH 5
#define R_MAX 6
#define STATUS_TO 0x0001
#define STATUS_RUN 0x0002
#define CONTROL_ITO 0x0001
#define CONTROL_CONT 0x0002
#define CONTROL_START 0x0004
#define CONTROL_STOP 0x0008
#define TYPE_ALTERA_TIMER "ALTR.timer"
OBJECT_DECLARE_SIMPLE_TYPE(AlteraTimer, ALTERA_TIMER)
struct AlteraTimer {
SysBusDevice busdev;
MemoryRegion mmio;
qemu_irq irq;
uint32_t freq_hz;
ptimer_state *ptimer;
uint32_t regs[R_MAX];
};
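/*
 * Guest programming sketch (illustrative): load the 32-bit period as two
 * 16-bit halves, then start the timer; on timeout STATUS_TO is set and,
 * if CONTROL_ITO is enabled, the IRQ line is raised:
 *
 *     regs[R_PERIODL] = period & 0xffff;
 *     regs[R_PERIODH] = period >> 16;
 *     regs[R_CONTROL] = CONTROL_ITO | CONTROL_CONT | CONTROL_START;
 *     ...wait for the interrupt...
 *     regs[R_STATUS]  = 0;    (any write to R_STATUS clears STATUS_TO)
 */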
static int timer_irq_state(AlteraTimer *t)
{
bool irq = (t->regs[R_STATUS] & STATUS_TO) &&
(t->regs[R_CONTROL] & CONTROL_ITO);
return irq;
}
static uint64_t timer_read(void *opaque, hwaddr addr,
unsigned int size)
{
AlteraTimer *t = opaque;
uint64_t r = 0;
addr >>= 2;
switch (addr) {
case R_CONTROL:
r = t->regs[R_CONTROL] & (CONTROL_ITO | CONTROL_CONT);
break;
default:
if (addr < ARRAY_SIZE(t->regs)) {
r = t->regs[addr];
}
break;
}
return r;
}
static void timer_write(void *opaque, hwaddr addr,
uint64_t value, unsigned int size)
{
AlteraTimer *t = opaque;
uint64_t tvalue;
uint32_t count = 0;
int irqState = timer_irq_state(t);
addr >>= 2;
switch (addr) {
case R_STATUS:
/* The timeout bit is cleared by writing the status register. */
t->regs[R_STATUS] &= ~STATUS_TO;
break;
case R_CONTROL:
ptimer_transaction_begin(t->ptimer);
t->regs[R_CONTROL] = value & (CONTROL_ITO | CONTROL_CONT);
if ((value & CONTROL_START) &&
!(t->regs[R_STATUS] & STATUS_RUN)) {
ptimer_run(t->ptimer, 1);
t->regs[R_STATUS] |= STATUS_RUN;
}
if ((value & CONTROL_STOP) && (t->regs[R_STATUS] & STATUS_RUN)) {
ptimer_stop(t->ptimer);
t->regs[R_STATUS] &= ~STATUS_RUN;
}
ptimer_transaction_commit(t->ptimer);
break;
case R_PERIODL:
case R_PERIODH:
ptimer_transaction_begin(t->ptimer);
t->regs[addr] = value & 0xFFFF;
if (t->regs[R_STATUS] & STATUS_RUN) {
ptimer_stop(t->ptimer);
t->regs[R_STATUS] &= ~STATUS_RUN;
}
tvalue = (t->regs[R_PERIODH] << 16) | t->regs[R_PERIODL];
ptimer_set_limit(t->ptimer, tvalue + 1, 1);
ptimer_transaction_commit(t->ptimer);
break;
case R_SNAPL:
case R_SNAPH:
count = ptimer_get_count(t->ptimer);
t->regs[R_SNAPL] = count & 0xFFFF;
t->regs[R_SNAPH] = count >> 16;
break;
default:
break;
}
if (irqState != timer_irq_state(t)) {
qemu_set_irq(t->irq, timer_irq_state(t));
}
}
static const MemoryRegionOps timer_ops = {
.read = timer_read,
.write = timer_write,
.endianness = DEVICE_NATIVE_ENDIAN,
.valid = {
.min_access_size = 1,
.max_access_size = 4
}
};
static void timer_hit(void *opaque)
{
AlteraTimer *t = opaque;
const uint64_t tvalue = (t->regs[R_PERIODH] << 16) | t->regs[R_PERIODL];
t->regs[R_STATUS] |= STATUS_TO;
ptimer_set_limit(t->ptimer, tvalue + 1, 1);
if (!(t->regs[R_CONTROL] & CONTROL_CONT)) {
t->regs[R_STATUS] &= ~STATUS_RUN;
ptimer_set_count(t->ptimer, tvalue);
} else {
ptimer_run(t->ptimer, 1);
}
qemu_set_irq(t->irq, timer_irq_state(t));
}
static void altera_timer_realize(DeviceState *dev, Error **errp)
{
AlteraTimer *t = ALTERA_TIMER(dev);
SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
if (t->freq_hz == 0) {
error_setg(errp, "\"clock-frequency\" property must be provided.");
return;
}
t->ptimer = ptimer_init(timer_hit, t, PTIMER_POLICY_LEGACY);
ptimer_transaction_begin(t->ptimer);
ptimer_set_freq(t->ptimer, t->freq_hz);
ptimer_transaction_commit(t->ptimer);
memory_region_init_io(&t->mmio, OBJECT(t), &timer_ops, t,
TYPE_ALTERA_TIMER, R_MAX * sizeof(uint32_t));
sysbus_init_mmio(sbd, &t->mmio);
}
static void altera_timer_init(Object *obj)
{
AlteraTimer *t = ALTERA_TIMER(obj);
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
sysbus_init_irq(sbd, &t->irq);
}
static void altera_timer_reset(DeviceState *dev)
{
AlteraTimer *t = ALTERA_TIMER(dev);
ptimer_transaction_begin(t->ptimer);
ptimer_stop(t->ptimer);
ptimer_set_limit(t->ptimer, 0xffffffff, 1);
ptimer_transaction_commit(t->ptimer);
memset(t->regs, 0, sizeof(t->regs));
}
static Property altera_timer_properties[] = {
DEFINE_PROP_UINT32("clock-frequency", AlteraTimer, freq_hz, 0),
DEFINE_PROP_END_OF_LIST(),
};
static void altera_timer_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
dc->realize = altera_timer_realize;
device_class_set_props(dc, altera_timer_properties);
dc->reset = altera_timer_reset;
}
static const TypeInfo altera_timer_info = {
.name = TYPE_ALTERA_TIMER,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(AlteraTimer),
.instance_init = altera_timer_init,
.class_init = altera_timer_class_init,
};
static void altera_timer_register(void)
{
type_register_static(&altera_timer_info);
}
type_init(altera_timer_register)
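A board wired this model up through the standard sysbus helpers; a minimal sketch, with the MMIO base, IRQ line and 75 MHz clock as illustrative values rather than those of any real machine:

    DeviceState *dev = qdev_new(TYPE_ALTERA_TIMER);
    qdev_prop_set_uint32(dev, "clock-frequency", 75 * 1000 * 1000);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0x18001440);    /* illustrative base */
    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, timer_irq);  /* illustrative IRQ */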

@@ -1,6 +1,5 @@
system_ss.add(when: 'CONFIG_A9_GTIMER', if_true: files('a9gtimer.c'))
system_ss.add(when: 'CONFIG_ALLWINNER_A10_PIT', if_true: files('allwinner-a10-pit.c'))
system_ss.add(when: 'CONFIG_ALTERA_TIMER', if_true: files('altera_timer.c'))
system_ss.add(when: 'CONFIG_ARM_MPTIMER', if_true: files('arm_mptimer.c'))
system_ss.add(when: 'CONFIG_ARM_TIMER', if_true: files('arm_timer.c'))
system_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_systick.c'))

@@ -241,10 +241,6 @@ enum bfd_architecture
bfd_arch_ia64, /* HP/Intel ia64 */
#define bfd_mach_ia64_elf64 64
#define bfd_mach_ia64_elf32 32
bfd_arch_nios2, /* Nios II */
#define bfd_mach_nios2 0
#define bfd_mach_nios2r1 1
#define bfd_mach_nios2r2 2
bfd_arch_rx, /* Renesas RX */
#define bfd_mach_rx 0x75
#define bfd_mach_rx_v2 0x76
@@ -456,7 +452,6 @@ int print_insn_crisv32 (bfd_vma, disassemble_info*);
int print_insn_crisv10 (bfd_vma, disassemble_info*);
int print_insn_microblaze (bfd_vma, disassemble_info*);
int print_insn_ia64 (bfd_vma, disassemble_info*);
int print_insn_nios2(bfd_vma, disassemble_info*);
int print_insn_xtensa (bfd_vma, disassemble_info*);
int print_insn_riscv32 (bfd_vma, disassemble_info*);
int print_insn_riscv64 (bfd_vma, disassemble_info*);

@@ -22,7 +22,6 @@
#pragma GCC poison TARGET_ABI_MIPSO32
#pragma GCC poison TARGET_MIPS64
#pragma GCC poison TARGET_ABI_MIPSN64
#pragma GCC poison TARGET_NIOS2
#pragma GCC poison TARGET_OPENRISC
#pragma GCC poison TARGET_PPC
#pragma GCC poison TARGET_PPC64
@@ -73,7 +72,6 @@
#pragma GCC poison CONFIG_M68K_DIS
#pragma GCC poison CONFIG_MICROBLAZE_DIS
#pragma GCC poison CONFIG_MIPS_DIS
#pragma GCC poison CONFIG_NIOS2_DIS
#pragma GCC poison CONFIG_PPC_DIS
#pragma GCC poison CONFIG_RISCV_DIS
#pragma GCC poison CONFIG_S390_DIS

@@ -25,8 +25,7 @@
#if (defined(TARGET_I386) && !defined(TARGET_X86_64)) \
|| defined(TARGET_SH4) \
|| defined(TARGET_OPENRISC) \
|| defined(TARGET_MICROBLAZE) \
|| defined(TARGET_NIOS2)
|| defined(TARGET_MICROBLAZE)
#define ABI_LLONG_ALIGNMENT 4
#endif

@@ -1,66 +0,0 @@
/*
* Vectored Interrupt Controller for nios2 processor
*
* Copyright (c) 2022 Neuroblade
*
* Interface:
* QOM property "cpu": link to the Nios2 CPU (must be set)
* Unnamed GPIO inputs 0..NIOS2_VIC_MAX_IRQ-1: input IRQ lines
* IRQ should be connected to nios2 IRQ0.
*
* Reference: "Embedded Peripherals IP User Guide
* for Intel® Quartus® Prime Design Suite: 21.4"
* Chapter 38 "Vectored Interrupt Controller Core"
* See: https://www.intel.com/content/www/us/en/docs/programmable/683130/21-4/vectored-interrupt-controller-core.html
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#ifndef HW_INTC_NIOS2_VIC_H
#define HW_INTC_NIOS2_VIC_H
#include "hw/sysbus.h"
#define TYPE_NIOS2_VIC "nios2-vic"
OBJECT_DECLARE_SIMPLE_TYPE(Nios2VIC, NIOS2_VIC)
#define NIOS2_VIC_MAX_IRQ 32
struct Nios2VIC {
/*< private >*/
SysBusDevice parent_obj;
/*< public >*/
qemu_irq output_int;
/* properties */
CPUState *cpu;
MemoryRegion csr;
uint32_t int_config[NIOS2_VIC_MAX_IRQ];
uint32_t vic_config;
uint32_t int_raw_status;
uint32_t int_enable;
uint32_t sw_int;
uint32_t vic_status;
uint32_t vec_tbl_base;
uint32_t vec_tbl_addr;
};
#endif /* HW_INTC_NIOS2_VIC_H */
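Per the interface comment above, a machine instantiated the VIC by linking it to the CPU, mapping its CSR region, and feeding its output into nios2 IRQ0; a rough sketch in which cpu, cpu_irq and vic_base are assumed board-level variables:

    DeviceState *vic = qdev_new(TYPE_NIOS2_VIC);
    object_property_set_link(OBJECT(vic), "cpu", OBJECT(cpu), &error_fatal);
    sysbus_realize_and_unref(SYS_BUS_DEVICE(vic), &error_fatal);
    sysbus_mmio_map(SYS_BUS_DEVICE(vic), 0, vic_base);
    sysbus_connect_irq(SYS_BUS_DEVICE(vic), 0, cpu_irq[0]);  /* -> nios2 IRQ0 */
    /* peripheral interrupts then feed the VIC's GPIO inputs 0..31 */
    qemu_irq in5 = qdev_get_gpio_in(vic, 5);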

@@ -1,37 +0,0 @@
/*
* RDMA device interface
*
* Copyright (C) 2019 Oracle
* Copyright (C) 2019 Red Hat Inc
*
* Authors:
* Yuval Shaia <yuval.shaia@oracle.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#ifndef RDMA_H
#define RDMA_H
#include "qom/object.h"
#define INTERFACE_RDMA_PROVIDER "rdma"
typedef struct RdmaProviderClass RdmaProviderClass;
DECLARE_CLASS_CHECKERS(RdmaProviderClass, RDMA_PROVIDER,
INTERFACE_RDMA_PROVIDER)
#define RDMA_PROVIDER(obj) \
INTERFACE_CHECK(RdmaProvider, (obj), \
INTERFACE_RDMA_PROVIDER)
typedef struct RdmaProvider RdmaProvider;
struct RdmaProviderClass {
InterfaceClass parent;
void (*format_statistics)(RdmaProvider *obj, GString *buf);
};
#endif

@@ -37,7 +37,6 @@ void hmp_info_spice(Monitor *mon, const QDict *qdict);
void hmp_info_balloon(Monitor *mon, const QDict *qdict);
void hmp_info_irq(Monitor *mon, const QDict *qdict);
void hmp_info_pic(Monitor *mon, const QDict *qdict);
void hmp_info_rdma(Monitor *mon, const QDict *qdict);
void hmp_info_pci(Monitor *mon, const QDict *qdict);
void hmp_info_tpm(Monitor *mon, const QDict *qdict);
void hmp_info_iothreads(Monitor *mon, const QDict *qdict);

@@ -1,685 +0,0 @@
/*
* Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of EITHER the GNU General Public License
* version 2 as published by the Free Software Foundation or the BSD
* 2-Clause License. This program is distributed in the hope that it
* will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License version 2 for more details at
* http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
*
* You should have received a copy of the GNU General Public License
* along with this program available in the file COPYING in the main
* directory of this source tree.
*
* The BSD 2-Clause License
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __PVRDMA_DEV_API_H__
#define __PVRDMA_DEV_API_H__
#include "standard-headers/linux/types.h"
#include "pvrdma_verbs.h"
/*
* PVRDMA version macros. Some new features require updates to PVRDMA_VERSION.
* These macros allow us to check for different features if necessary.
*/
#define PVRDMA_ROCEV1_VERSION 17
#define PVRDMA_ROCEV2_VERSION 18
#define PVRDMA_PPN64_VERSION 19
#define PVRDMA_QPHANDLE_VERSION 20
#define PVRDMA_VERSION PVRDMA_QPHANDLE_VERSION
#define PVRDMA_BOARD_ID 1
#define PVRDMA_REV_ID 1
/*
* Masks and accessors for page directory, which is a two-level lookup:
* page directory -> page table -> page. Only one directory for now, but we
* could expand that easily. 9 bits for tables, 9 bits for pages, gives one
* gigabyte for memory regions and so forth.
*/
#define PVRDMA_PDIR_SHIFT 18
#define PVRDMA_PTABLE_SHIFT 9
#define PVRDMA_PAGE_DIR_DIR(x) (((x) >> PVRDMA_PDIR_SHIFT) & 0x1)
#define PVRDMA_PAGE_DIR_TABLE(x) (((x) >> PVRDMA_PTABLE_SHIFT) & 0x1ff)
#define PVRDMA_PAGE_DIR_PAGE(x) ((x) & 0x1ff)
#define PVRDMA_PAGE_DIR_MAX_PAGES (1 * 512 * 512)
#define PVRDMA_MAX_FAST_REG_PAGES 128
/*
* Max MSI-X vectors.
*/
#define PVRDMA_MAX_INTERRUPTS 3
/* Register offsets within PCI resource on BAR1. */
#define PVRDMA_REG_VERSION 0x00 /* R: Version of device. */
#define PVRDMA_REG_DSRLOW 0x04 /* W: Device shared region low PA. */
#define PVRDMA_REG_DSRHIGH 0x08 /* W: Device shared region high PA. */
#define PVRDMA_REG_CTL 0x0c /* W: PVRDMA_DEVICE_CTL */
#define PVRDMA_REG_REQUEST 0x10 /* W: Indicate device request. */
#define PVRDMA_REG_ERR 0x14 /* R: Device error. */
#define PVRDMA_REG_ICR 0x18 /* R: Interrupt cause. */
#define PVRDMA_REG_IMR 0x1c /* R/W: Interrupt mask. */
#define PVRDMA_REG_MACL 0x20 /* R/W: MAC address low. */
#define PVRDMA_REG_MACH 0x24 /* R/W: MAC address high. */
/* Object flags. */
#define PVRDMA_CQ_FLAG_ARMED_SOL BIT(0) /* Armed for solicited-only. */
#define PVRDMA_CQ_FLAG_ARMED BIT(1) /* Armed. */
#define PVRDMA_MR_FLAG_DMA BIT(0) /* DMA region. */
#define PVRDMA_MR_FLAG_FRMR BIT(1) /* Fast reg memory region. */
/*
* Atomic operation capability (masked versions are extended atomic
* operations).
*/
#define PVRDMA_ATOMIC_OP_COMP_SWAP BIT(0) /* Compare and swap. */
#define PVRDMA_ATOMIC_OP_FETCH_ADD BIT(1) /* Fetch and add. */
#define PVRDMA_ATOMIC_OP_MASK_COMP_SWAP BIT(2) /* Masked compare and swap. */
#define PVRDMA_ATOMIC_OP_MASK_FETCH_ADD BIT(3) /* Masked fetch and add. */
/*
* Base Memory Management Extension flags to support Fast Reg Memory Regions
* and Fast Reg Work Requests. Each flag represents a verb operation and we
* must support all of them to qualify for the BMME device cap.
*/
#define PVRDMA_BMME_FLAG_LOCAL_INV BIT(0) /* Local Invalidate. */
#define PVRDMA_BMME_FLAG_REMOTE_INV BIT(1) /* Remote Invalidate. */
#define PVRDMA_BMME_FLAG_FAST_REG_WR BIT(2) /* Fast Reg Work Request. */
/*
* GID types. The interpretation of the gid_types bit field in the device
* capabilities will depend on the device mode. For now, the device only
* supports RoCE as mode, so only the different GID types for RoCE are
* defined.
*/
#define PVRDMA_GID_TYPE_FLAG_ROCE_V1 BIT(0)
#define PVRDMA_GID_TYPE_FLAG_ROCE_V2 BIT(1)
/*
* Version checks. This checks whether each version supports specific
* capabilities from the device.
*/
#define PVRDMA_IS_VERSION17(_dev) \
(_dev->dsr_version == PVRDMA_ROCEV1_VERSION && \
_dev->dsr->caps.gid_types == PVRDMA_GID_TYPE_FLAG_ROCE_V1)
#define PVRDMA_IS_VERSION18(_dev) \
(_dev->dsr_version >= PVRDMA_ROCEV2_VERSION && \
(_dev->dsr->caps.gid_types == PVRDMA_GID_TYPE_FLAG_ROCE_V1 || \
_dev->dsr->caps.gid_types == PVRDMA_GID_TYPE_FLAG_ROCE_V2)) \
#define PVRDMA_SUPPORTED(_dev) \
((_dev->dsr->caps.mode == PVRDMA_DEVICE_MODE_ROCE) && \
(PVRDMA_IS_VERSION17(_dev) || PVRDMA_IS_VERSION18(_dev)))
/*
* Get capability values based on device version.
*/
#define PVRDMA_GET_CAP(_dev, _old_val, _val) \
((PVRDMA_IS_VERSION18(_dev)) ? _val : _old_val)
enum pvrdma_pci_resource {
PVRDMA_PCI_RESOURCE_MSIX, /* BAR0: MSI-X, MMIO. */
PVRDMA_PCI_RESOURCE_REG, /* BAR1: Registers, MMIO. */
PVRDMA_PCI_RESOURCE_UAR, /* BAR2: UAR pages, MMIO, 64-bit. */
PVRDMA_PCI_RESOURCE_LAST, /* Last. */
};
enum pvrdma_device_ctl {
PVRDMA_DEVICE_CTL_ACTIVATE, /* Activate device. */
PVRDMA_DEVICE_CTL_UNQUIESCE, /* Unquiesce device. */
PVRDMA_DEVICE_CTL_RESET, /* Reset device. */
};
enum pvrdma_intr_vector {
PVRDMA_INTR_VECTOR_RESPONSE, /* Command response. */
PVRDMA_INTR_VECTOR_ASYNC, /* Async events. */
PVRDMA_INTR_VECTOR_CQ, /* CQ notification. */
/* Additional CQ notification vectors. */
};
enum pvrdma_intr_cause {
PVRDMA_INTR_CAUSE_RESPONSE = (1 << PVRDMA_INTR_VECTOR_RESPONSE),
PVRDMA_INTR_CAUSE_ASYNC = (1 << PVRDMA_INTR_VECTOR_ASYNC),
PVRDMA_INTR_CAUSE_CQ = (1 << PVRDMA_INTR_VECTOR_CQ),
};
enum pvrdma_gos_bits {
PVRDMA_GOS_BITS_UNK, /* Unknown. */
PVRDMA_GOS_BITS_32, /* 32-bit. */
PVRDMA_GOS_BITS_64, /* 64-bit. */
};
enum pvrdma_gos_type {
PVRDMA_GOS_TYPE_UNK, /* Unknown. */
PVRDMA_GOS_TYPE_LINUX, /* Linux. */
};
enum pvrdma_device_mode {
PVRDMA_DEVICE_MODE_ROCE, /* RoCE. */
PVRDMA_DEVICE_MODE_IWARP, /* iWarp. */
PVRDMA_DEVICE_MODE_IB, /* InfiniBand. */
};
struct pvrdma_gos_info {
uint32_t gos_bits:2; /* W: PVRDMA_GOS_BITS_ */
uint32_t gos_type:4; /* W: PVRDMA_GOS_TYPE_ */
uint32_t gos_ver:16; /* W: Guest OS version. */
uint32_t gos_misc:10; /* W: Other. */
uint32_t pad; /* Pad to 8-byte alignment. */
};
struct pvrdma_device_caps {
uint64_t fw_ver; /* R: Query device. */
uint64_t node_guid;
uint64_t sys_image_guid;
uint64_t max_mr_size;
uint64_t page_size_cap;
uint64_t atomic_arg_sizes; /* EX verbs. */
uint32_t ex_comp_mask; /* EX verbs. */
uint32_t device_cap_flags2; /* EX verbs. */
uint32_t max_fa_bit_boundary; /* EX verbs. */
uint32_t log_max_atomic_inline_arg; /* EX verbs. */
uint32_t vendor_id;
uint32_t vendor_part_id;
uint32_t hw_ver;
uint32_t max_qp;
uint32_t max_qp_wr;
uint32_t device_cap_flags;
uint32_t max_sge;
uint32_t max_sge_rd;
uint32_t max_cq;
uint32_t max_cqe;
uint32_t max_mr;
uint32_t max_pd;
uint32_t max_qp_rd_atom;
uint32_t max_ee_rd_atom;
uint32_t max_res_rd_atom;
uint32_t max_qp_init_rd_atom;
uint32_t max_ee_init_rd_atom;
uint32_t max_ee;
uint32_t max_rdd;
uint32_t max_mw;
uint32_t max_raw_ipv6_qp;
uint32_t max_raw_ethy_qp;
uint32_t max_mcast_grp;
uint32_t max_mcast_qp_attach;
uint32_t max_total_mcast_qp_attach;
uint32_t max_ah;
uint32_t max_fmr;
uint32_t max_map_per_fmr;
uint32_t max_srq;
uint32_t max_srq_wr;
uint32_t max_srq_sge;
uint32_t max_uar;
uint32_t gid_tbl_len;
uint16_t max_pkeys;
uint8_t local_ca_ack_delay;
uint8_t phys_port_cnt;
uint8_t mode; /* PVRDMA_DEVICE_MODE_ */
uint8_t atomic_ops; /* PVRDMA_ATOMIC_OP_* bits */
uint8_t bmme_flags; /* FRWR Mem Mgmt Extensions */
uint8_t gid_types; /* PVRDMA_GID_TYPE_FLAG_ */
uint32_t max_fast_reg_page_list_len;
};
struct pvrdma_ring_page_info {
uint32_t num_pages; /* Num pages incl. header. */
uint32_t reserved; /* Reserved. */
uint64_t pdir_dma; /* Page directory PA. */
};
#pragma pack(push, 1)
struct pvrdma_device_shared_region {
uint32_t driver_version; /* W: Driver version. */
uint32_t pad; /* Pad to 8-byte align. */
struct pvrdma_gos_info gos_info; /* W: Guest OS information. */
uint64_t cmd_slot_dma; /* W: Command slot address. */
uint64_t resp_slot_dma; /* W: Response slot address. */
struct pvrdma_ring_page_info async_ring_pages;
/* W: Async ring page info. */
struct pvrdma_ring_page_info cq_ring_pages;
/* W: CQ ring page info. */
union {
uint32_t uar_pfn; /* W: UAR pageframe. */
uint64_t uar_pfn64; /* W: 64-bit UAR page frame. */
};
struct pvrdma_device_caps caps; /* R: Device capabilities. */
};
#pragma pack(pop)
/* Event types. Currently a 1:1 mapping with enum ib_event. */
enum pvrdma_eqe_type {
PVRDMA_EVENT_CQ_ERR,
PVRDMA_EVENT_QP_FATAL,
PVRDMA_EVENT_QP_REQ_ERR,
PVRDMA_EVENT_QP_ACCESS_ERR,
PVRDMA_EVENT_COMM_EST,
PVRDMA_EVENT_SQ_DRAINED,
PVRDMA_EVENT_PATH_MIG,
PVRDMA_EVENT_PATH_MIG_ERR,
PVRDMA_EVENT_DEVICE_FATAL,
PVRDMA_EVENT_PORT_ACTIVE,
PVRDMA_EVENT_PORT_ERR,
PVRDMA_EVENT_LID_CHANGE,
PVRDMA_EVENT_PKEY_CHANGE,
PVRDMA_EVENT_SM_CHANGE,
PVRDMA_EVENT_SRQ_ERR,
PVRDMA_EVENT_SRQ_LIMIT_REACHED,
PVRDMA_EVENT_QP_LAST_WQE_REACHED,
PVRDMA_EVENT_CLIENT_REREGISTER,
PVRDMA_EVENT_GID_CHANGE,
};
/* Event queue element. */
struct pvrdma_eqe {
uint32_t type; /* Event type. */
uint32_t info; /* Handle, other. */
};
/* CQ notification queue element. */
struct pvrdma_cqne {
uint32_t info; /* Handle */
};
enum {
PVRDMA_CMD_FIRST,
PVRDMA_CMD_QUERY_PORT = PVRDMA_CMD_FIRST,
PVRDMA_CMD_QUERY_PKEY,
PVRDMA_CMD_CREATE_PD,
PVRDMA_CMD_DESTROY_PD,
PVRDMA_CMD_CREATE_MR,
PVRDMA_CMD_DESTROY_MR,
PVRDMA_CMD_CREATE_CQ,
PVRDMA_CMD_RESIZE_CQ,
PVRDMA_CMD_DESTROY_CQ,
PVRDMA_CMD_CREATE_QP,
PVRDMA_CMD_MODIFY_QP,
PVRDMA_CMD_QUERY_QP,
PVRDMA_CMD_DESTROY_QP,
PVRDMA_CMD_CREATE_UC,
PVRDMA_CMD_DESTROY_UC,
PVRDMA_CMD_CREATE_BIND,
PVRDMA_CMD_DESTROY_BIND,
PVRDMA_CMD_CREATE_SRQ,
PVRDMA_CMD_MODIFY_SRQ,
PVRDMA_CMD_QUERY_SRQ,
PVRDMA_CMD_DESTROY_SRQ,
PVRDMA_CMD_MAX,
};
enum {
PVRDMA_CMD_FIRST_RESP = (1 << 31),
PVRDMA_CMD_QUERY_PORT_RESP = PVRDMA_CMD_FIRST_RESP,
PVRDMA_CMD_QUERY_PKEY_RESP,
PVRDMA_CMD_CREATE_PD_RESP,
PVRDMA_CMD_DESTROY_PD_RESP_NOOP,
PVRDMA_CMD_CREATE_MR_RESP,
PVRDMA_CMD_DESTROY_MR_RESP_NOOP,
PVRDMA_CMD_CREATE_CQ_RESP,
PVRDMA_CMD_RESIZE_CQ_RESP,
PVRDMA_CMD_DESTROY_CQ_RESP_NOOP,
PVRDMA_CMD_CREATE_QP_RESP,
PVRDMA_CMD_MODIFY_QP_RESP,
PVRDMA_CMD_QUERY_QP_RESP,
PVRDMA_CMD_DESTROY_QP_RESP,
PVRDMA_CMD_CREATE_UC_RESP,
PVRDMA_CMD_DESTROY_UC_RESP_NOOP,
PVRDMA_CMD_CREATE_BIND_RESP_NOOP,
PVRDMA_CMD_DESTROY_BIND_RESP_NOOP,
PVRDMA_CMD_CREATE_SRQ_RESP,
PVRDMA_CMD_MODIFY_SRQ_RESP,
PVRDMA_CMD_QUERY_SRQ_RESP,
PVRDMA_CMD_DESTROY_SRQ_RESP,
PVRDMA_CMD_MAX_RESP,
};
struct pvrdma_cmd_hdr {
uint64_t response; /* Key for response lookup. */
uint32_t cmd; /* PVRDMA_CMD_ */
uint32_t reserved; /* Reserved. */
};
struct pvrdma_cmd_resp_hdr {
uint64_t response; /* From cmd hdr. */
uint32_t ack; /* PVRDMA_CMD_XXX_RESP */
uint8_t err; /* Error. */
uint8_t reserved[3]; /* Reserved. */
};
struct pvrdma_cmd_query_port {
struct pvrdma_cmd_hdr hdr;
uint8_t port_num;
uint8_t reserved[7];
};
struct pvrdma_cmd_query_port_resp {
struct pvrdma_cmd_resp_hdr hdr;
struct pvrdma_port_attr attrs;
};
struct pvrdma_cmd_query_pkey {
struct pvrdma_cmd_hdr hdr;
uint8_t port_num;
uint8_t index;
uint8_t reserved[6];
};
struct pvrdma_cmd_query_pkey_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint16_t pkey;
uint8_t reserved[6];
};
struct pvrdma_cmd_create_uc {
struct pvrdma_cmd_hdr hdr;
union {
uint32_t pfn; /* UAR page frame number */
uint64_t pfn64; /* 64-bit UAR page frame number */
};
};
struct pvrdma_cmd_create_uc_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t ctx_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_destroy_uc {
struct pvrdma_cmd_hdr hdr;
uint32_t ctx_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_pd {
struct pvrdma_cmd_hdr hdr;
uint32_t ctx_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_pd_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t pd_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_destroy_pd {
struct pvrdma_cmd_hdr hdr;
uint32_t pd_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_mr {
struct pvrdma_cmd_hdr hdr;
uint64_t start;
uint64_t length;
uint64_t pdir_dma;
uint32_t pd_handle;
uint32_t access_flags;
uint32_t flags;
uint32_t nchunks;
};
struct pvrdma_cmd_create_mr_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t mr_handle;
uint32_t lkey;
uint32_t rkey;
uint8_t reserved[4];
};
struct pvrdma_cmd_destroy_mr {
struct pvrdma_cmd_hdr hdr;
uint32_t mr_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_cq {
struct pvrdma_cmd_hdr hdr;
uint64_t pdir_dma;
uint32_t ctx_handle;
uint32_t cqe;
uint32_t nchunks;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_cq_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t cq_handle;
uint32_t cqe;
};
struct pvrdma_cmd_resize_cq {
struct pvrdma_cmd_hdr hdr;
uint32_t cq_handle;
uint32_t cqe;
};
struct pvrdma_cmd_resize_cq_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t cqe;
uint8_t reserved[4];
};
struct pvrdma_cmd_destroy_cq {
struct pvrdma_cmd_hdr hdr;
uint32_t cq_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_srq {
struct pvrdma_cmd_hdr hdr;
uint64_t pdir_dma;
uint32_t pd_handle;
uint32_t nchunks;
struct pvrdma_srq_attr attrs;
uint8_t srq_type;
uint8_t reserved[7];
};
struct pvrdma_cmd_create_srq_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t srqn;
uint8_t reserved[4];
};
struct pvrdma_cmd_modify_srq {
struct pvrdma_cmd_hdr hdr;
uint32_t srq_handle;
uint32_t attr_mask;
struct pvrdma_srq_attr attrs;
};
struct pvrdma_cmd_query_srq {
struct pvrdma_cmd_hdr hdr;
uint32_t srq_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_query_srq_resp {
struct pvrdma_cmd_resp_hdr hdr;
struct pvrdma_srq_attr attrs;
};
struct pvrdma_cmd_destroy_srq {
struct pvrdma_cmd_hdr hdr;
uint32_t srq_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_qp {
struct pvrdma_cmd_hdr hdr;
uint64_t pdir_dma;
uint32_t pd_handle;
uint32_t send_cq_handle;
uint32_t recv_cq_handle;
uint32_t srq_handle;
uint32_t max_send_wr;
uint32_t max_recv_wr;
uint32_t max_send_sge;
uint32_t max_recv_sge;
uint32_t max_inline_data;
uint32_t lkey;
uint32_t access_flags;
uint16_t total_chunks;
uint16_t send_chunks;
uint16_t max_atomic_arg;
uint8_t sq_sig_all;
uint8_t qp_type;
uint8_t is_srq;
uint8_t reserved[3];
};
struct pvrdma_cmd_create_qp_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t qpn;
uint32_t max_send_wr;
uint32_t max_recv_wr;
uint32_t max_send_sge;
uint32_t max_recv_sge;
uint32_t max_inline_data;
};
struct pvrdma_cmd_create_qp_resp_v2 {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t qpn;
uint32_t qp_handle;
uint32_t max_send_wr;
uint32_t max_recv_wr;
uint32_t max_send_sge;
uint32_t max_recv_sge;
uint32_t max_inline_data;
};
struct pvrdma_cmd_modify_qp {
struct pvrdma_cmd_hdr hdr;
uint32_t qp_handle;
uint32_t attr_mask;
struct pvrdma_qp_attr attrs;
};
struct pvrdma_cmd_query_qp {
struct pvrdma_cmd_hdr hdr;
uint32_t qp_handle;
uint32_t attr_mask;
};
struct pvrdma_cmd_query_qp_resp {
struct pvrdma_cmd_resp_hdr hdr;
struct pvrdma_qp_attr attrs;
};
struct pvrdma_cmd_destroy_qp {
struct pvrdma_cmd_hdr hdr;
uint32_t qp_handle;
uint8_t reserved[4];
};
struct pvrdma_cmd_destroy_qp_resp {
struct pvrdma_cmd_resp_hdr hdr;
uint32_t events_reported;
uint8_t reserved[4];
};
struct pvrdma_cmd_create_bind {
struct pvrdma_cmd_hdr hdr;
uint32_t mtu;
uint32_t vlan;
uint32_t index;
uint8_t new_gid[16];
uint8_t gid_type;
uint8_t reserved[3];
};
struct pvrdma_cmd_destroy_bind {
struct pvrdma_cmd_hdr hdr;
uint32_t index;
uint8_t dest_gid[16];
uint8_t reserved[4];
};
union pvrdma_cmd_req {
struct pvrdma_cmd_hdr hdr;
struct pvrdma_cmd_query_port query_port;
struct pvrdma_cmd_query_pkey query_pkey;
struct pvrdma_cmd_create_uc create_uc;
struct pvrdma_cmd_destroy_uc destroy_uc;
struct pvrdma_cmd_create_pd create_pd;
struct pvrdma_cmd_destroy_pd destroy_pd;
struct pvrdma_cmd_create_mr create_mr;
struct pvrdma_cmd_destroy_mr destroy_mr;
struct pvrdma_cmd_create_cq create_cq;
struct pvrdma_cmd_resize_cq resize_cq;
struct pvrdma_cmd_destroy_cq destroy_cq;
struct pvrdma_cmd_create_qp create_qp;
struct pvrdma_cmd_modify_qp modify_qp;
struct pvrdma_cmd_query_qp query_qp;
struct pvrdma_cmd_destroy_qp destroy_qp;
struct pvrdma_cmd_create_bind create_bind;
struct pvrdma_cmd_destroy_bind destroy_bind;
struct pvrdma_cmd_create_srq create_srq;
struct pvrdma_cmd_modify_srq modify_srq;
struct pvrdma_cmd_query_srq query_srq;
struct pvrdma_cmd_destroy_srq destroy_srq;
};
union pvrdma_cmd_resp {
struct pvrdma_cmd_resp_hdr hdr;
struct pvrdma_cmd_query_port_resp query_port_resp;
struct pvrdma_cmd_query_pkey_resp query_pkey_resp;
struct pvrdma_cmd_create_uc_resp create_uc_resp;
struct pvrdma_cmd_create_pd_resp create_pd_resp;
struct pvrdma_cmd_create_mr_resp create_mr_resp;
struct pvrdma_cmd_create_cq_resp create_cq_resp;
struct pvrdma_cmd_resize_cq_resp resize_cq_resp;
struct pvrdma_cmd_create_qp_resp create_qp_resp;
struct pvrdma_cmd_create_qp_resp_v2 create_qp_resp_v2;
struct pvrdma_cmd_query_qp_resp query_qp_resp;
struct pvrdma_cmd_destroy_qp_resp destroy_qp_resp;
struct pvrdma_cmd_create_srq_resp create_srq_resp;
struct pvrdma_cmd_query_srq_resp query_srq_resp;
};
#endif /* __PVRDMA_DEV_API_H__ */
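Every control-path operation travels through the request/response unions above, written to the command slot advertised in pvrdma_device_shared_region and answered in the response slot. A driver-side sketch of creating a protection domain, where cmd_slot, resp_slot, post_request() and wait_response() are hypothetical stand-ins for the real mapping and doorbell plumbing:

    union pvrdma_cmd_req *req = cmd_slot;    /* mapping of cmd_slot_dma */
    union pvrdma_cmd_resp *rsp = resp_slot;  /* mapping of resp_slot_dma */

    memset(req, 0, sizeof(*req));
    req->create_pd.hdr.cmd = PVRDMA_CMD_CREATE_PD;
    req->create_pd.ctx_handle = ctx_handle;  /* from an earlier CREATE_UC */
    post_request();                          /* write PVRDMA_REG_REQUEST */
    wait_response();                         /* PVRDMA_INTR_VECTOR_RESPONSE */
    if (rsp->hdr.ack == PVRDMA_CMD_CREATE_PD_RESP && rsp->hdr.err == 0) {
        pd_handle = rsp->create_pd_resp.pd_handle;
    }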

@@ -1,348 +0,0 @@
/*
* Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of EITHER the GNU General Public License
* version 2 as published by the Free Software Foundation or the BSD
* 2-Clause License. This program is distributed in the hope that it
* will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License version 2 for more details at
* http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
*
* You should have received a copy of the GNU General Public License
* along with this program available in the file COPYING in the main
* directory of this source tree.
*
* The BSD 2-Clause License
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __PVRDMA_VERBS_H__
#define __PVRDMA_VERBS_H__
#include "standard-headers/linux/types.h"
union pvrdma_gid {
uint8_t raw[16];
struct {
uint64_t subnet_prefix;
uint64_t interface_id;
} global;
};
enum pvrdma_link_layer {
PVRDMA_LINK_LAYER_UNSPECIFIED,
PVRDMA_LINK_LAYER_INFINIBAND,
PVRDMA_LINK_LAYER_ETHERNET,
};
enum pvrdma_mtu {
PVRDMA_MTU_256 = 1,
PVRDMA_MTU_512 = 2,
PVRDMA_MTU_1024 = 3,
PVRDMA_MTU_2048 = 4,
PVRDMA_MTU_4096 = 5,
};
enum pvrdma_port_state {
PVRDMA_PORT_NOP = 0,
PVRDMA_PORT_DOWN = 1,
PVRDMA_PORT_INIT = 2,
PVRDMA_PORT_ARMED = 3,
PVRDMA_PORT_ACTIVE = 4,
PVRDMA_PORT_ACTIVE_DEFER = 5,
};
enum pvrdma_port_cap_flags {
PVRDMA_PORT_SM = 1 << 1,
PVRDMA_PORT_NOTICE_SUP = 1 << 2,
PVRDMA_PORT_TRAP_SUP = 1 << 3,
PVRDMA_PORT_OPT_IPD_SUP = 1 << 4,
PVRDMA_PORT_AUTO_MIGR_SUP = 1 << 5,
PVRDMA_PORT_SL_MAP_SUP = 1 << 6,
PVRDMA_PORT_MKEY_NVRAM = 1 << 7,
PVRDMA_PORT_PKEY_NVRAM = 1 << 8,
PVRDMA_PORT_LED_INFO_SUP = 1 << 9,
PVRDMA_PORT_SM_DISABLED = 1 << 10,
PVRDMA_PORT_SYS_IMAGE_GUID_SUP = 1 << 11,
PVRDMA_PORT_PKEY_SW_EXT_PORT_TRAP_SUP = 1 << 12,
PVRDMA_PORT_EXTENDED_SPEEDS_SUP = 1 << 14,
PVRDMA_PORT_CM_SUP = 1 << 16,
PVRDMA_PORT_SNMP_TUNNEL_SUP = 1 << 17,
PVRDMA_PORT_REINIT_SUP = 1 << 18,
PVRDMA_PORT_DEVICE_MGMT_SUP = 1 << 19,
PVRDMA_PORT_VENDOR_CLASS_SUP = 1 << 20,
PVRDMA_PORT_DR_NOTICE_SUP = 1 << 21,
PVRDMA_PORT_CAP_MASK_NOTICE_SUP = 1 << 22,
PVRDMA_PORT_BOOT_MGMT_SUP = 1 << 23,
PVRDMA_PORT_LINK_LATENCY_SUP = 1 << 24,
PVRDMA_PORT_CLIENT_REG_SUP = 1 << 25,
PVRDMA_PORT_IP_BASED_GIDS = 1 << 26,
PVRDMA_PORT_CAP_FLAGS_MAX = PVRDMA_PORT_IP_BASED_GIDS,
};
enum pvrdma_port_width {
PVRDMA_WIDTH_1X = 1,
PVRDMA_WIDTH_4X = 2,
PVRDMA_WIDTH_8X = 4,
PVRDMA_WIDTH_12X = 8,
};
enum pvrdma_port_speed {
PVRDMA_SPEED_SDR = 1,
PVRDMA_SPEED_DDR = 2,
PVRDMA_SPEED_QDR = 4,
PVRDMA_SPEED_FDR10 = 8,
PVRDMA_SPEED_FDR = 16,
PVRDMA_SPEED_EDR = 32,
};
struct pvrdma_port_attr {
enum pvrdma_port_state state;
enum pvrdma_mtu max_mtu;
enum pvrdma_mtu active_mtu;
uint32_t gid_tbl_len;
uint32_t port_cap_flags;
uint32_t max_msg_sz;
uint32_t bad_pkey_cntr;
uint32_t qkey_viol_cntr;
uint16_t pkey_tbl_len;
uint16_t lid;
uint16_t sm_lid;
uint8_t lmc;
uint8_t max_vl_num;
uint8_t sm_sl;
uint8_t subnet_timeout;
uint8_t init_type_reply;
uint8_t active_width;
uint8_t active_speed;
uint8_t phys_state;
uint8_t reserved[2];
};
struct pvrdma_global_route {
union pvrdma_gid dgid;
uint32_t flow_label;
uint8_t sgid_index;
uint8_t hop_limit;
uint8_t traffic_class;
uint8_t reserved;
};
struct pvrdma_grh {
uint32_t version_tclass_flow;
uint16_t paylen;
uint8_t next_hdr;
uint8_t hop_limit;
union pvrdma_gid sgid;
union pvrdma_gid dgid;
};
enum pvrdma_ah_flags {
PVRDMA_AH_GRH = 1,
};
enum pvrdma_rate {
PVRDMA_RATE_PORT_CURRENT = 0,
PVRDMA_RATE_2_5_GBPS = 2,
PVRDMA_RATE_5_GBPS = 5,
PVRDMA_RATE_10_GBPS = 3,
PVRDMA_RATE_20_GBPS = 6,
PVRDMA_RATE_30_GBPS = 4,
PVRDMA_RATE_40_GBPS = 7,
PVRDMA_RATE_60_GBPS = 8,
PVRDMA_RATE_80_GBPS = 9,
PVRDMA_RATE_120_GBPS = 10,
PVRDMA_RATE_14_GBPS = 11,
PVRDMA_RATE_56_GBPS = 12,
PVRDMA_RATE_112_GBPS = 13,
PVRDMA_RATE_168_GBPS = 14,
PVRDMA_RATE_25_GBPS = 15,
PVRDMA_RATE_100_GBPS = 16,
PVRDMA_RATE_200_GBPS = 17,
PVRDMA_RATE_300_GBPS = 18,
};
struct pvrdma_ah_attr {
struct pvrdma_global_route grh;
uint16_t dlid;
uint16_t vlan_id;
uint8_t sl;
uint8_t src_path_bits;
uint8_t static_rate;
uint8_t ah_flags;
uint8_t port_num;
uint8_t dmac[6];
uint8_t reserved;
};
enum pvrdma_cq_notify_flags {
PVRDMA_CQ_SOLICITED = 1 << 0,
PVRDMA_CQ_NEXT_COMP = 1 << 1,
PVRDMA_CQ_SOLICITED_MASK = PVRDMA_CQ_SOLICITED |
PVRDMA_CQ_NEXT_COMP,
PVRDMA_CQ_REPORT_MISSED_EVENTS = 1 << 2,
};
struct pvrdma_qp_cap {
uint32_t max_send_wr;
uint32_t max_recv_wr;
uint32_t max_send_sge;
uint32_t max_recv_sge;
uint32_t max_inline_data;
uint32_t reserved;
};
enum pvrdma_sig_type {
PVRDMA_SIGNAL_ALL_WR,
PVRDMA_SIGNAL_REQ_WR,
};
enum pvrdma_qp_type {
PVRDMA_QPT_SMI,
PVRDMA_QPT_GSI,
PVRDMA_QPT_RC,
PVRDMA_QPT_UC,
PVRDMA_QPT_UD,
PVRDMA_QPT_RAW_IPV6,
PVRDMA_QPT_RAW_ETHERTYPE,
PVRDMA_QPT_RAW_PACKET = 8,
PVRDMA_QPT_XRC_INI = 9,
PVRDMA_QPT_XRC_TGT,
PVRDMA_QPT_MAX,
};
enum pvrdma_qp_create_flags {
PVRDMA_QP_CREATE_IPOPVRDMA_UD_LSO = 1 << 0,
PVRDMA_QP_CREATE_BLOCK_MULTICAST_LOOPBACK = 1 << 1,
};
enum pvrdma_qp_attr_mask {
PVRDMA_QP_STATE = 1 << 0,
PVRDMA_QP_CUR_STATE = 1 << 1,
PVRDMA_QP_EN_SQD_ASYNC_NOTIFY = 1 << 2,
PVRDMA_QP_ACCESS_FLAGS = 1 << 3,
PVRDMA_QP_PKEY_INDEX = 1 << 4,
PVRDMA_QP_PORT = 1 << 5,
PVRDMA_QP_QKEY = 1 << 6,
PVRDMA_QP_AV = 1 << 7,
PVRDMA_QP_PATH_MTU = 1 << 8,
PVRDMA_QP_TIMEOUT = 1 << 9,
PVRDMA_QP_RETRY_CNT = 1 << 10,
PVRDMA_QP_RNR_RETRY = 1 << 11,
PVRDMA_QP_RQ_PSN = 1 << 12,
PVRDMA_QP_MAX_QP_RD_ATOMIC = 1 << 13,
PVRDMA_QP_ALT_PATH = 1 << 14,
PVRDMA_QP_MIN_RNR_TIMER = 1 << 15,
PVRDMA_QP_SQ_PSN = 1 << 16,
PVRDMA_QP_MAX_DEST_RD_ATOMIC = 1 << 17,
PVRDMA_QP_PATH_MIG_STATE = 1 << 18,
PVRDMA_QP_CAP = 1 << 19,
PVRDMA_QP_DEST_QPN = 1 << 20,
PVRDMA_QP_ATTR_MASK_MAX = PVRDMA_QP_DEST_QPN,
};
enum pvrdma_qp_state {
PVRDMA_QPS_RESET,
PVRDMA_QPS_INIT,
PVRDMA_QPS_RTR,
PVRDMA_QPS_RTS,
PVRDMA_QPS_SQD,
PVRDMA_QPS_SQE,
PVRDMA_QPS_ERR,
};
enum pvrdma_mig_state {
PVRDMA_MIG_MIGRATED,
PVRDMA_MIG_REARM,
PVRDMA_MIG_ARMED,
};
enum pvrdma_mw_type {
PVRDMA_MW_TYPE_1 = 1,
PVRDMA_MW_TYPE_2 = 2,
};
struct pvrdma_srq_attr {
uint32_t max_wr;
uint32_t max_sge;
uint32_t srq_limit;
uint32_t reserved;
};
struct pvrdma_qp_attr {
enum pvrdma_qp_state qp_state;
enum pvrdma_qp_state cur_qp_state;
enum pvrdma_mtu path_mtu;
enum pvrdma_mig_state path_mig_state;
uint32_t qkey;
uint32_t rq_psn;
uint32_t sq_psn;
uint32_t dest_qp_num;
uint32_t qp_access_flags;
uint16_t pkey_index;
uint16_t alt_pkey_index;
uint8_t en_sqd_async_notify;
uint8_t sq_draining;
uint8_t max_rd_atomic;
uint8_t max_dest_rd_atomic;
uint8_t min_rnr_timer;
uint8_t port_num;
uint8_t timeout;
uint8_t retry_cnt;
uint8_t rnr_retry;
uint8_t alt_port_num;
uint8_t alt_timeout;
uint8_t reserved[5];
struct pvrdma_qp_cap cap;
struct pvrdma_ah_attr ah_attr;
struct pvrdma_ah_attr alt_ah_attr;
};
enum pvrdma_send_flags {
PVRDMA_SEND_FENCE = 1 << 0,
PVRDMA_SEND_SIGNALED = 1 << 1,
PVRDMA_SEND_SOLICITED = 1 << 2,
PVRDMA_SEND_INLINE = 1 << 3,
PVRDMA_SEND_IP_CSUM = 1 << 4,
PVRDMA_SEND_FLAGS_MAX = PVRDMA_SEND_IP_CSUM,
};
enum pvrdma_access_flags {
PVRDMA_ACCESS_LOCAL_WRITE = 1 << 0,
PVRDMA_ACCESS_REMOTE_WRITE = 1 << 1,
PVRDMA_ACCESS_REMOTE_READ = 1 << 2,
PVRDMA_ACCESS_REMOTE_ATOMIC = 1 << 3,
PVRDMA_ACCESS_MW_BIND = 1 << 4,
PVRDMA_ZERO_BASED = 1 << 5,
PVRDMA_ACCESS_ON_DEMAND = 1 << 6,
PVRDMA_ACCESS_FLAGS_MAX = PVRDMA_ACCESS_ON_DEMAND,
};
#endif /* __PVRDMA_VERBS_H__ */

@@ -1,310 +0,0 @@
/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
/*
* Copyright (c) 2012-2016 VMware, Inc. All rights reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of EITHER the GNU General Public License
* version 2 as published by the Free Software Foundation or the BSD
* 2-Clause License. This program is distributed in the hope that it
* will be useful, but WITHOUT ANY WARRANTY; WITHOUT EVEN THE IMPLIED
* WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
* See the GNU General Public License version 2 for more details at
* http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html.
*
* You should have received a copy of the GNU General Public License
* along with this program available in the file COPYING in the main
* directory of this source tree.
*
* The BSD 2-Clause License
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __VMW_PVRDMA_ABI_H__
#define __VMW_PVRDMA_ABI_H__
#include "standard-headers/linux/types.h"
#define PVRDMA_UVERBS_ABI_VERSION 3 /* ABI Version. */
#define PVRDMA_UAR_HANDLE_MASK 0x00FFFFFF /* Bottom 24 bits. */
#define PVRDMA_UAR_QP_OFFSET 0 /* QP doorbell. */
#define PVRDMA_UAR_QP_SEND (1 << 30) /* Send bit. */
#define PVRDMA_UAR_QP_RECV (1 << 31) /* Recv bit. */
#define PVRDMA_UAR_CQ_OFFSET 4 /* CQ doorbell. */
#define PVRDMA_UAR_CQ_ARM_SOL (1 << 29) /* Arm solicited bit. */
#define PVRDMA_UAR_CQ_ARM (1 << 30) /* Arm bit. */
#define PVRDMA_UAR_CQ_POLL (1 << 31) /* Poll bit. */
#define PVRDMA_UAR_SRQ_OFFSET 8 /* SRQ doorbell. */
#define PVRDMA_UAR_SRQ_RECV (1 << 30) /* Recv bit. */
enum pvrdma_wr_opcode {
PVRDMA_WR_RDMA_WRITE,
PVRDMA_WR_RDMA_WRITE_WITH_IMM,
PVRDMA_WR_SEND,
PVRDMA_WR_SEND_WITH_IMM,
PVRDMA_WR_RDMA_READ,
PVRDMA_WR_ATOMIC_CMP_AND_SWP,
PVRDMA_WR_ATOMIC_FETCH_AND_ADD,
PVRDMA_WR_LSO,
PVRDMA_WR_SEND_WITH_INV,
PVRDMA_WR_RDMA_READ_WITH_INV,
PVRDMA_WR_LOCAL_INV,
PVRDMA_WR_FAST_REG_MR,
PVRDMA_WR_MASKED_ATOMIC_CMP_AND_SWP,
PVRDMA_WR_MASKED_ATOMIC_FETCH_AND_ADD,
PVRDMA_WR_BIND_MW,
PVRDMA_WR_REG_SIG_MR,
PVRDMA_WR_ERROR,
};
enum pvrdma_wc_status {
PVRDMA_WC_SUCCESS,
PVRDMA_WC_LOC_LEN_ERR,
PVRDMA_WC_LOC_QP_OP_ERR,
PVRDMA_WC_LOC_EEC_OP_ERR,
PVRDMA_WC_LOC_PROT_ERR,
PVRDMA_WC_WR_FLUSH_ERR,
PVRDMA_WC_MW_BIND_ERR,
PVRDMA_WC_BAD_RESP_ERR,
PVRDMA_WC_LOC_ACCESS_ERR,
PVRDMA_WC_REM_INV_REQ_ERR,
PVRDMA_WC_REM_ACCESS_ERR,
PVRDMA_WC_REM_OP_ERR,
PVRDMA_WC_RETRY_EXC_ERR,
PVRDMA_WC_RNR_RETRY_EXC_ERR,
PVRDMA_WC_LOC_RDD_VIOL_ERR,
PVRDMA_WC_REM_INV_RD_REQ_ERR,
PVRDMA_WC_REM_ABORT_ERR,
PVRDMA_WC_INV_EECN_ERR,
PVRDMA_WC_INV_EEC_STATE_ERR,
PVRDMA_WC_FATAL_ERR,
PVRDMA_WC_RESP_TIMEOUT_ERR,
PVRDMA_WC_GENERAL_ERR,
};
enum pvrdma_wc_opcode {
PVRDMA_WC_SEND,
PVRDMA_WC_RDMA_WRITE,
PVRDMA_WC_RDMA_READ,
PVRDMA_WC_COMP_SWAP,
PVRDMA_WC_FETCH_ADD,
PVRDMA_WC_BIND_MW,
PVRDMA_WC_LSO,
PVRDMA_WC_LOCAL_INV,
PVRDMA_WC_FAST_REG_MR,
PVRDMA_WC_MASKED_COMP_SWAP,
PVRDMA_WC_MASKED_FETCH_ADD,
PVRDMA_WC_RECV = 1 << 7,
PVRDMA_WC_RECV_RDMA_WITH_IMM,
};
enum pvrdma_wc_flags {
PVRDMA_WC_GRH = 1 << 0,
PVRDMA_WC_WITH_IMM = 1 << 1,
PVRDMA_WC_WITH_INVALIDATE = 1 << 2,
PVRDMA_WC_IP_CSUM_OK = 1 << 3,
PVRDMA_WC_WITH_SMAC = 1 << 4,
PVRDMA_WC_WITH_VLAN = 1 << 5,
PVRDMA_WC_WITH_NETWORK_HDR_TYPE = 1 << 6,
PVRDMA_WC_FLAGS_MAX = PVRDMA_WC_WITH_NETWORK_HDR_TYPE,
};
enum pvrdma_network_type {
PVRDMA_NETWORK_IB,
PVRDMA_NETWORK_ROCE_V1 = PVRDMA_NETWORK_IB,
PVRDMA_NETWORK_IPV4,
PVRDMA_NETWORK_IPV6
};
struct pvrdma_alloc_ucontext_resp {
uint32_t qp_tab_size;
uint32_t reserved;
};
struct pvrdma_alloc_pd_resp {
uint32_t pdn;
uint32_t reserved;
};
struct pvrdma_create_cq {
uint64_t __attribute__((aligned(8))) buf_addr;
uint32_t buf_size;
uint32_t reserved;
};
struct pvrdma_create_cq_resp {
uint32_t cqn;
uint32_t reserved;
};
struct pvrdma_resize_cq {
uint64_t __attribute__((aligned(8))) buf_addr;
uint32_t buf_size;
uint32_t reserved;
};
struct pvrdma_create_srq {
uint64_t __attribute__((aligned(8))) buf_addr;
uint32_t buf_size;
uint32_t reserved;
};
struct pvrdma_create_srq_resp {
uint32_t srqn;
uint32_t reserved;
};
struct pvrdma_create_qp {
uint64_t __attribute__((aligned(8))) rbuf_addr;
uint64_t __attribute__((aligned(8))) sbuf_addr;
uint32_t rbuf_size;
uint32_t sbuf_size;
uint64_t __attribute__((aligned(8))) qp_addr;
};
struct pvrdma_create_qp_resp {
uint32_t qpn;
uint32_t qp_handle;
};
/* PVRDMA masked atomic compare and swap */
struct pvrdma_ex_cmp_swap {
uint64_t __attribute__((aligned(8))) swap_val;
uint64_t __attribute__((aligned(8))) compare_val;
uint64_t __attribute__((aligned(8))) swap_mask;
uint64_t __attribute__((aligned(8))) compare_mask;
};
/* PVRDMA masked atomic fetch and add */
struct pvrdma_ex_fetch_add {
uint64_t __attribute__((aligned(8))) add_val;
uint64_t __attribute__((aligned(8))) field_boundary;
};
/* PVRDMA address vector. */
struct pvrdma_av {
uint32_t port_pd;
uint32_t sl_tclass_flowlabel;
uint8_t dgid[16];
uint8_t src_path_bits;
uint8_t gid_index;
uint8_t stat_rate;
uint8_t hop_limit;
uint8_t dmac[6];
uint8_t reserved[6];
};
/* PVRDMA scatter/gather entry */
struct pvrdma_sge {
uint64_t __attribute__((aligned(8))) addr;
uint32_t length;
uint32_t lkey;
};
/* PVRDMA receive queue work request */
struct pvrdma_rq_wqe_hdr {
uint64_t __attribute__((aligned(8))) wr_id; /* wr id */
uint32_t num_sge; /* size of s/g array */
uint32_t total_len; /* reserved */
};
/* Use pvrdma_sge (ib_sge) for receive queue s/g array elements. */
/* PVRDMA send queue work request */
struct pvrdma_sq_wqe_hdr {
uint64_t __attribute__((aligned(8))) wr_id; /* wr id */
uint32_t num_sge; /* size of s/g array */
uint32_t total_len; /* reserved */
uint32_t opcode; /* operation type */
uint32_t send_flags; /* wr flags */
union {
uint32_t imm_data;
uint32_t invalidate_rkey;
} ex;
uint32_t reserved;
union {
struct {
uint64_t __attribute__((aligned(8))) remote_addr;
uint32_t rkey;
uint8_t reserved[4];
} rdma;
struct {
uint64_t __attribute__((aligned(8))) remote_addr;
uint64_t __attribute__((aligned(8))) compare_add;
uint64_t __attribute__((aligned(8))) swap;
uint32_t rkey;
uint32_t reserved;
} atomic;
struct {
uint64_t __attribute__((aligned(8))) remote_addr;
uint32_t log_arg_sz;
uint32_t rkey;
union {
struct pvrdma_ex_cmp_swap cmp_swap;
struct pvrdma_ex_fetch_add fetch_add;
} wr_data;
} masked_atomics;
struct {
uint64_t __attribute__((aligned(8))) iova_start;
uint64_t __attribute__((aligned(8))) pl_pdir_dma;
uint32_t page_shift;
uint32_t page_list_len;
uint32_t length;
uint32_t access_flags;
uint32_t rkey;
uint32_t reserved;
} fast_reg;
struct {
uint32_t remote_qpn;
uint32_t remote_qkey;
struct pvrdma_av av;
} ud;
} wr;
};
/* Use pvrdma_sge (ib_sge) for send queue s/g array elements. */
/* Completion queue element. */
struct pvrdma_cqe {
uint64_t __attribute__((aligned(8))) wr_id;
uint64_t __attribute__((aligned(8))) qp;
uint32_t opcode;
uint32_t status;
uint32_t byte_len;
uint32_t imm_data;
uint32_t src_qp;
uint32_t wc_flags;
uint32_t vendor_err;
uint16_t pkey_index;
uint16_t slid;
uint8_t sl;
uint8_t dlid_path_bits;
uint8_t port_num;
uint8_t smac[6];
uint8_t network_hdr_type;
uint8_t reserved2[6]; /* Pad to next power of 2 (64). */
};
#endif /* __VMW_PVRDMA_ABI_H__ */
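The PVRDMA_UAR_* defines above pack an object handle and an operation into a single 32-bit doorbell write on the BAR2 UAR page; ringing a QP's send queue, for example, comes down to the following sketch, where uar is an assumed pointer to the mapped page:

    static inline void pvrdma_ring_send(volatile uint32_t *uar, uint32_t qp_handle)
    {
        /* QP doorbell sits at byte offset PVRDMA_UAR_QP_OFFSET (0) */
        uar[PVRDMA_UAR_QP_OFFSET / sizeof(uint32_t)] =
            (qp_handle & PVRDMA_UAR_HANDLE_MASK) | PVRDMA_UAR_QP_SEND;
    }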

@@ -18,7 +18,6 @@ enum {
QEMU_ARCH_XTENSA = (1 << 12),
QEMU_ARCH_OPENRISC = (1 << 13),
QEMU_ARCH_TRICORE = (1 << 16),
QEMU_ARCH_NIOS2 = (1 << 17),
QEMU_ARCH_HPPA = (1 << 18),
QEMU_ARCH_RISCV = (1 << 19),
QEMU_ARCH_RX = (1 << 20),

@@ -1505,105 +1505,6 @@ static void elf_core_copy_regs(target_elf_gregset_t *regs, const CPUMBState *env
#endif /* TARGET_MICROBLAZE */
#ifdef TARGET_NIOS2
#define elf_check_arch(x) ((x) == EM_ALTERA_NIOS2)
#define ELF_CLASS ELFCLASS32
#define ELF_ARCH EM_ALTERA_NIOS2
static void init_thread(struct target_pt_regs *regs, struct image_info *infop)
{
regs->ea = infop->entry;
regs->sp = infop->start_stack;
}
#define LO_COMMPAGE TARGET_PAGE_SIZE
static bool init_guest_commpage(void)
{
static const uint8_t kuser_page[4 + 2 * 64] = {
/* __kuser_helper_version */
[0x00] = 0x02, 0x00, 0x00, 0x00,
/* __kuser_cmpxchg */
[0x04] = 0x3a, 0x6c, 0x3b, 0x00, /* trap 16 */
0x3a, 0x28, 0x00, 0xf8, /* ret */
/* __kuser_sigtramp */
[0x44] = 0xc4, 0x22, 0x80, 0x00, /* movi r2, __NR_rt_sigreturn */
0x3a, 0x68, 0x3b, 0x00, /* trap 0 */
};
int host_page_size = qemu_real_host_page_size();
void *want, *addr;
want = g2h_untagged(LO_COMMPAGE & -host_page_size);
addr = mmap(want, host_page_size, PROT_READ | PROT_WRITE,
MAP_ANONYMOUS | MAP_PRIVATE |
(reserved_va ? MAP_FIXED : MAP_FIXED_NOREPLACE),
-1, 0);
if (addr == MAP_FAILED) {
perror("Allocating guest commpage");
exit(EXIT_FAILURE);
}
if (addr != want) {
return false;
}
memcpy(g2h_untagged(LO_COMMPAGE), kuser_page, sizeof(kuser_page));
if (mprotect(addr, host_page_size, PROT_READ)) {
perror("Protecting guest commpage");
exit(EXIT_FAILURE);
}
page_set_flags(LO_COMMPAGE, LO_COMMPAGE | ~TARGET_PAGE_MASK,
PAGE_READ | PAGE_EXEC | PAGE_VALID);
return true;
}
#define ELF_EXEC_PAGESIZE 4096
#define USE_ELF_CORE_DUMP
#define ELF_NREG 49
typedef target_elf_greg_t target_elf_gregset_t[ELF_NREG];
/* See linux kernel: arch/mips/kernel/process.c:elf_dump_regs. */
static void elf_core_copy_regs(target_elf_gregset_t *regs,
const CPUNios2State *env)
{
int i;
(*regs)[0] = -1;
for (i = 1; i < 8; i++) /* r0-r7 */
(*regs)[i] = tswapreg(env->regs[i + 7]);
for (i = 8; i < 16; i++) /* r8-r15 */
(*regs)[i] = tswapreg(env->regs[i - 8]);
for (i = 16; i < 24; i++) /* r16-r23 */
(*regs)[i] = tswapreg(env->regs[i + 7]);
(*regs)[24] = -1; /* R_ET */
(*regs)[25] = -1; /* R_BT */
(*regs)[26] = tswapreg(env->regs[R_GP]);
(*regs)[27] = tswapreg(env->regs[R_SP]);
(*regs)[28] = tswapreg(env->regs[R_FP]);
(*regs)[29] = tswapreg(env->regs[R_EA]);
(*regs)[30] = -1; /* R_SSTATUS */
(*regs)[31] = tswapreg(env->regs[R_RA]);
(*regs)[32] = tswapreg(env->pc);
(*regs)[33] = -1; /* R_STATUS */
(*regs)[34] = tswapreg(env->regs[CR_ESTATUS]);
for (i = 35; i < 49; i++) /* ... */
(*regs)[i] = -1;
}
#endif /* TARGET_NIOS2 */
#ifdef TARGET_OPENRISC
#define ELF_ARCH EM_OPENRISC

@@ -1,157 +0,0 @@
/*
* qemu user cpu loop
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu.h"
#include "user-internals.h"
#include "cpu_loop-common.h"
#include "signal-common.h"
void cpu_loop(CPUNios2State *env)
{
CPUState *cs = env_cpu(env);
int trapnr, ret;
for (;;) {
cpu_exec_start(cs);
trapnr = cpu_exec(cs);
cpu_exec_end(cs);
process_queued_cpu_work(cs);
switch (trapnr) {
case EXCP_INTERRUPT:
/* just indicate that signals should be handled asap */
break;
case EXCP_DIV:
/* Match kernel's handle_diverror_c(). */
env->pc -= 4;
force_sig_fault(TARGET_SIGFPE, TARGET_FPE_INTDIV, env->pc);
break;
case EXCP_UNALIGN:
case EXCP_UNALIGND:
force_sig_fault(TARGET_SIGBUS, TARGET_BUS_ADRALN,
env->ctrl[CR_BADADDR]);
break;
case EXCP_ILLEGAL:
case EXCP_UNIMPL:
/* Match kernel's handle_illegal_c(). */
env->pc -= 4;
force_sig_fault(TARGET_SIGILL, TARGET_ILL_ILLOPC, env->pc);
break;
case EXCP_SUPERI:
/* Match kernel's handle_supervisor_instr(). */
env->pc -= 4;
force_sig_fault(TARGET_SIGILL, TARGET_ILL_PRVOPC, env->pc);
break;
case EXCP_TRAP:
switch (env->error_code) {
case 0:
qemu_log_mask(CPU_LOG_INT, "\nSyscall\n");
ret = do_syscall(env, env->regs[2],
env->regs[4], env->regs[5], env->regs[6],
env->regs[7], env->regs[8], env->regs[9],
0, 0);
if (ret == -QEMU_ESIGRETURN) {
/* rt_sigreturn has set all state. */
break;
}
if (ret == -QEMU_ERESTARTSYS) {
env->pc -= 4;
break;
}
/*
* See the code after translate_rc_and_ret: all negative
* values are errors (aided by userspace restricted to 2G),
* errno is returned positive in r2, and error indication
* is a boolean in r7.
*/
env->regs[2] = abs(ret);
env->regs[7] = ret < 0;
break;
case 1:
qemu_log_mask(CPU_LOG_INT, "\nTrap 1\n");
force_sig_fault(TARGET_SIGUSR1, 0, env->pc);
break;
case 2:
qemu_log_mask(CPU_LOG_INT, "\nTrap 2\n");
force_sig_fault(TARGET_SIGUSR2, 0, env->pc);
break;
case 31:
qemu_log_mask(CPU_LOG_INT, "\nTrap 31\n");
/* Match kernel's breakpoint_c(). */
env->pc -= 4;
force_sig_fault(TARGET_SIGTRAP, TARGET_TRAP_BRKPT, env->pc);
break;
default:
qemu_log_mask(CPU_LOG_INT, "\nTrap %d\n", env->error_code);
force_sig_fault(TARGET_SIGILL, TARGET_ILL_ILLTRP, env->pc);
break;
case 16: /* QEMU specific, for __kuser_cmpxchg */
{
abi_ptr g = env->regs[4];
uint32_t *h, n, o;
if (g & 0x3) {
force_sig_fault(TARGET_SIGBUS, TARGET_BUS_ADRALN, g);
break;
}
ret = page_get_flags(g);
if (!(ret & PAGE_VALID)) {
force_sig_fault(TARGET_SIGSEGV, TARGET_SEGV_MAPERR, g);
break;
}
if (!(ret & PAGE_READ) || !(ret & PAGE_WRITE)) {
force_sig_fault(TARGET_SIGSEGV, TARGET_SEGV_ACCERR, g);
break;
}
h = g2h(cs, g);
o = env->regs[5];
n = env->regs[6];
env->regs[2] = qatomic_cmpxchg(h, o, n) - o;
}
break;
}
break;
case EXCP_DEBUG:
force_sig_fault(TARGET_SIGTRAP, TARGET_TRAP_BRKPT, env->pc);
break;
default:
EXCP_DUMP(env, "\nqemu: unhandled CPU exception %#x - aborting\n",
trapnr);
abort();
}
process_pending_signals(env);
}
}
void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
{
env->regs[R_SP] = regs->sp;
env->pc = regs->ea;
}
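The trap-16 fast path above implements the __kuser_cmpxchg contract published through the commpage in elfload.c: the guest passes the word address in r4, the expected value in r5 and the replacement in r6, and r2 comes back as zero exactly when the swap happened. The emulated semantics, reduced to plain C:

    /* returns 0 iff *ptr held 'expected' and was replaced by 'newval' */
    static uint32_t kuser_cmpxchg(uint32_t *ptr, uint32_t expected, uint32_t newval)
    {
        return qatomic_cmpxchg(ptr, expected, newval) - expected;
    }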

@@ -1,210 +0,0 @@
/*
* Emulation of Linux signals
*
* Copyright (c) 2003 Fabrice Bellard
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#include "qemu.h"
#include "user-internals.h"
#include "signal-common.h"
#include "linux-user/trace.h"
#define MCONTEXT_VERSION 2
struct target_sigcontext {
int version;
unsigned long gregs[32];
};
struct target_ucontext {
abi_ulong tuc_flags;
abi_ulong tuc_link;
target_stack_t tuc_stack;
struct target_sigcontext tuc_mcontext;
target_sigset_t tuc_sigmask; /* mask last for extensibility */
};
struct target_rt_sigframe {
struct target_siginfo info;
struct target_ucontext uc;
};
static void rt_setup_ucontext(struct target_ucontext *uc, CPUNios2State *env)
{
unsigned long *gregs = uc->tuc_mcontext.gregs;
__put_user(MCONTEXT_VERSION, &uc->tuc_mcontext.version);
__put_user(env->regs[1], &gregs[0]);
__put_user(env->regs[2], &gregs[1]);
__put_user(env->regs[3], &gregs[2]);
__put_user(env->regs[4], &gregs[3]);
__put_user(env->regs[5], &gregs[4]);
__put_user(env->regs[6], &gregs[5]);
__put_user(env->regs[7], &gregs[6]);
__put_user(env->regs[8], &gregs[7]);
__put_user(env->regs[9], &gregs[8]);
__put_user(env->regs[10], &gregs[9]);
__put_user(env->regs[11], &gregs[10]);
__put_user(env->regs[12], &gregs[11]);
__put_user(env->regs[13], &gregs[12]);
__put_user(env->regs[14], &gregs[13]);
__put_user(env->regs[15], &gregs[14]);
__put_user(env->regs[16], &gregs[15]);
__put_user(env->regs[17], &gregs[16]);
__put_user(env->regs[18], &gregs[17]);
__put_user(env->regs[19], &gregs[18]);
__put_user(env->regs[20], &gregs[19]);
__put_user(env->regs[21], &gregs[20]);
__put_user(env->regs[22], &gregs[21]);
__put_user(env->regs[23], &gregs[22]);
__put_user(env->regs[R_RA], &gregs[23]);
__put_user(env->regs[R_FP], &gregs[24]);
__put_user(env->regs[R_GP], &gregs[25]);
__put_user(env->pc, &gregs[27]);
__put_user(env->regs[R_SP], &gregs[28]);
}
static int rt_restore_ucontext(CPUNios2State *env, struct target_ucontext *uc)
{
int temp;
unsigned long *gregs = uc->tuc_mcontext.gregs;
/* Always make any pending restarted system calls return -EINTR */
/* current->restart_block.fn = do_no_restart_syscall; */
__get_user(temp, &uc->tuc_mcontext.version);
if (temp != MCONTEXT_VERSION) {
return 1;
}
/* restore passed registers */
__get_user(env->regs[1], &gregs[0]);
__get_user(env->regs[2], &gregs[1]);
__get_user(env->regs[3], &gregs[2]);
__get_user(env->regs[4], &gregs[3]);
__get_user(env->regs[5], &gregs[4]);
__get_user(env->regs[6], &gregs[5]);
__get_user(env->regs[7], &gregs[6]);
__get_user(env->regs[8], &gregs[7]);
__get_user(env->regs[9], &gregs[8]);
__get_user(env->regs[10], &gregs[9]);
__get_user(env->regs[11], &gregs[10]);
__get_user(env->regs[12], &gregs[11]);
__get_user(env->regs[13], &gregs[12]);
__get_user(env->regs[14], &gregs[13]);
__get_user(env->regs[15], &gregs[14]);
__get_user(env->regs[16], &gregs[15]);
__get_user(env->regs[17], &gregs[16]);
__get_user(env->regs[18], &gregs[17]);
__get_user(env->regs[19], &gregs[18]);
__get_user(env->regs[20], &gregs[19]);
__get_user(env->regs[21], &gregs[20]);
__get_user(env->regs[22], &gregs[21]);
__get_user(env->regs[23], &gregs[22]);
/* gregs[23] is handled below */
/* Verify, should this be settable */
__get_user(env->regs[R_FP], &gregs[24]);
/* Verify, should this be settable */
__get_user(env->regs[R_GP], &gregs[25]);
/* Not really necessary, no user-settable bits */
__get_user(temp, &gregs[26]);
__get_user(env->pc, &gregs[27]);
__get_user(env->regs[R_RA], &gregs[23]);
__get_user(env->regs[R_SP], &gregs[28]);
target_restore_altstack(&uc->tuc_stack, env);
return 0;
}
static abi_ptr get_sigframe(struct target_sigaction *ka, CPUNios2State *env,
size_t frame_size)
{
unsigned long usp;
/* This is the X/Open sanctioned signal stack switching. */
usp = target_sigsp(get_sp_from_cpustate(env), ka);
/* Verify, is it 32 or 64 bit aligned */
return (usp - frame_size) & -8;
}
void setup_rt_frame(int sig, struct target_sigaction *ka,
target_siginfo_t *info,
target_sigset_t *set,
CPUNios2State *env)
{
struct target_rt_sigframe *frame;
abi_ptr frame_addr;
int i;
frame_addr = get_sigframe(ka, env, sizeof(*frame));
if (!lock_user_struct(VERIFY_WRITE, frame, frame_addr, 0)) {
force_sigsegv(sig);
return;
}
frame->info = *info;
/* Create the ucontext. */
__put_user(0, &frame->uc.tuc_flags);
__put_user(0, &frame->uc.tuc_link);
target_save_altstack(&frame->uc.tuc_stack, env);
rt_setup_ucontext(&frame->uc, env);
for (i = 0; i < TARGET_NSIG_WORDS; i++) {
__put_user(set->sig[i], &frame->uc.tuc_sigmask.sig[i]);
}
/* Set up to return from userspace; jump to fixed address sigreturn
trampoline on kuser page. */
env->regs[R_RA] = (unsigned long) (0x1044);
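/* 0x1044 = LO_COMMPAGE (0x1000) + 0x44, the __kuser_sigtramp slot
   populated by init_guest_commpage() in elfload.c. */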
/* Set up registers for signal handler */
env->regs[R_SP] = frame_addr;
env->regs[4] = sig;
env->regs[5] = frame_addr + offsetof(struct target_rt_sigframe, info);
env->regs[6] = frame_addr + offsetof(struct target_rt_sigframe, uc);
env->pc = ka->_sa_handler;
unlock_user_struct(frame, frame_addr, 1);
}
long do_rt_sigreturn(CPUNios2State *env)
{
/* Verify, can we follow the stack back */
abi_ulong frame_addr = env->regs[R_SP];
struct target_rt_sigframe *frame;
sigset_t set;
if (!lock_user_struct(VERIFY_READ, frame, frame_addr, 1)) {
goto badframe;
}
target_to_host_sigset(&set, &frame->uc.tuc_sigmask);
set_sigmask(&set);
if (rt_restore_ucontext(env, &frame->uc)) {
goto badframe;
}
unlock_user_struct(frame, frame_addr, 0);
return -QEMU_ESIGRETURN;
badframe:
unlock_user_struct(frame, frame_addr, 0);
force_sig(TARGET_SIGSEGV);
return -QEMU_ESIGRETURN;
}

View File

@@ -1 +0,0 @@
#include "../generic/sockbits.h"

View File

@@ -1,333 +0,0 @@
/*
* This file contains the system call numbers.
* Do not modify.
* This file is generated by scripts/gensyscalls.sh
*/
#ifndef LINUX_USER_NIOS2_SYSCALL_NR_H
#define LINUX_USER_NIOS2_SYSCALL_NR_H
#define TARGET_NR_cacheflush (TARGET_NR_arch_specific_syscall)
#define TARGET_NR_io_setup 0
#define TARGET_NR_io_destroy 1
#define TARGET_NR_io_submit 2
#define TARGET_NR_io_cancel 3
#define TARGET_NR_io_getevents 4
#define TARGET_NR_setxattr 5
#define TARGET_NR_lsetxattr 6
#define TARGET_NR_fsetxattr 7
#define TARGET_NR_getxattr 8
#define TARGET_NR_lgetxattr 9
#define TARGET_NR_fgetxattr 10
#define TARGET_NR_listxattr 11
#define TARGET_NR_llistxattr 12
#define TARGET_NR_flistxattr 13
#define TARGET_NR_removexattr 14
#define TARGET_NR_lremovexattr 15
#define TARGET_NR_fremovexattr 16
#define TARGET_NR_getcwd 17
#define TARGET_NR_lookup_dcookie 18
#define TARGET_NR_eventfd2 19
#define TARGET_NR_epoll_create1 20
#define TARGET_NR_epoll_ctl 21
#define TARGET_NR_epoll_pwait 22
#define TARGET_NR_dup 23
#define TARGET_NR_dup3 24
#define TARGET_NR_fcntl64 25
#define TARGET_NR_inotify_init1 26
#define TARGET_NR_inotify_add_watch 27
#define TARGET_NR_inotify_rm_watch 28
#define TARGET_NR_ioctl 29
#define TARGET_NR_ioprio_set 30
#define TARGET_NR_ioprio_get 31
#define TARGET_NR_flock 32
#define TARGET_NR_mknodat 33
#define TARGET_NR_mkdirat 34
#define TARGET_NR_unlinkat 35
#define TARGET_NR_symlinkat 36
#define TARGET_NR_linkat 37
#define TARGET_NR_renameat 38
#define TARGET_NR_umount2 39
#define TARGET_NR_mount 40
#define TARGET_NR_pivot_root 41
#define TARGET_NR_nfsservctl 42
#define TARGET_NR_statfs64 43
#define TARGET_NR_fstatfs64 44
#define TARGET_NR_truncate64 45
#define TARGET_NR_ftruncate64 46
#define TARGET_NR_fallocate 47
#define TARGET_NR_faccessat 48
#define TARGET_NR_chdir 49
#define TARGET_NR_fchdir 50
#define TARGET_NR_chroot 51
#define TARGET_NR_fchmod 52
#define TARGET_NR_fchmodat 53
#define TARGET_NR_fchownat 54
#define TARGET_NR_fchown 55
#define TARGET_NR_openat 56
#define TARGET_NR_close 57
#define TARGET_NR_vhangup 58
#define TARGET_NR_pipe2 59
#define TARGET_NR_quotactl 60
#define TARGET_NR_getdents64 61
#define TARGET_NR_llseek 62
#define TARGET_NR_read 63
#define TARGET_NR_write 64
#define TARGET_NR_readv 65
#define TARGET_NR_writev 66
#define TARGET_NR_pread64 67
#define TARGET_NR_pwrite64 68
#define TARGET_NR_preadv 69
#define TARGET_NR_pwritev 70
#define TARGET_NR_sendfile64 71
#define TARGET_NR_pselect6 72
#define TARGET_NR_ppoll 73
#define TARGET_NR_signalfd4 74
#define TARGET_NR_vmsplice 75
#define TARGET_NR_splice 76
#define TARGET_NR_tee 77
#define TARGET_NR_readlinkat 78
#define TARGET_NR_fstatat64 79
#define TARGET_NR_fstat64 80
#define TARGET_NR_sync 81
#define TARGET_NR_fsync 82
#define TARGET_NR_fdatasync 83
#define TARGET_NR_sync_file_range 84
#define TARGET_NR_timerfd_create 85
#define TARGET_NR_timerfd_settime 86
#define TARGET_NR_timerfd_gettime 87
#define TARGET_NR_utimensat 88
#define TARGET_NR_acct 89
#define TARGET_NR_capget 90
#define TARGET_NR_capset 91
#define TARGET_NR_personality 92
#define TARGET_NR_exit 93
#define TARGET_NR_exit_group 94
#define TARGET_NR_waitid 95
#define TARGET_NR_set_tid_address 96
#define TARGET_NR_unshare 97
#define TARGET_NR_futex 98
#define TARGET_NR_set_robust_list 99
#define TARGET_NR_get_robust_list 100
#define TARGET_NR_nanosleep 101
#define TARGET_NR_getitimer 102
#define TARGET_NR_setitimer 103
#define TARGET_NR_kexec_load 104
#define TARGET_NR_init_module 105
#define TARGET_NR_delete_module 106
#define TARGET_NR_timer_create 107
#define TARGET_NR_timer_gettime 108
#define TARGET_NR_timer_getoverrun 109
#define TARGET_NR_timer_settime 110
#define TARGET_NR_timer_delete 111
#define TARGET_NR_clock_settime 112
#define TARGET_NR_clock_gettime 113
#define TARGET_NR_clock_getres 114
#define TARGET_NR_clock_nanosleep 115
#define TARGET_NR_syslog 116
#define TARGET_NR_ptrace 117
#define TARGET_NR_sched_setparam 118
#define TARGET_NR_sched_setscheduler 119
#define TARGET_NR_sched_getscheduler 120
#define TARGET_NR_sched_getparam 121
#define TARGET_NR_sched_setaffinity 122
#define TARGET_NR_sched_getaffinity 123
#define TARGET_NR_sched_yield 124
#define TARGET_NR_sched_get_priority_max 125
#define TARGET_NR_sched_get_priority_min 126
#define TARGET_NR_sched_rr_get_interval 127
#define TARGET_NR_restart_syscall 128
#define TARGET_NR_kill 129
#define TARGET_NR_tkill 130
#define TARGET_NR_tgkill 131
#define TARGET_NR_sigaltstack 132
#define TARGET_NR_rt_sigsuspend 133
#define TARGET_NR_rt_sigaction 134
#define TARGET_NR_rt_sigprocmask 135
#define TARGET_NR_rt_sigpending 136
#define TARGET_NR_rt_sigtimedwait 137
#define TARGET_NR_rt_sigqueueinfo 138
#define TARGET_NR_rt_sigreturn 139
#define TARGET_NR_setpriority 140
#define TARGET_NR_getpriority 141
#define TARGET_NR_reboot 142
#define TARGET_NR_setregid 143
#define TARGET_NR_setgid 144
#define TARGET_NR_setreuid 145
#define TARGET_NR_setuid 146
#define TARGET_NR_setresuid 147
#define TARGET_NR_getresuid 148
#define TARGET_NR_setresgid 149
#define TARGET_NR_getresgid 150
#define TARGET_NR_setfsuid 151
#define TARGET_NR_setfsgid 152
#define TARGET_NR_times 153
#define TARGET_NR_setpgid 154
#define TARGET_NR_getpgid 155
#define TARGET_NR_getsid 156
#define TARGET_NR_setsid 157
#define TARGET_NR_getgroups 158
#define TARGET_NR_setgroups 159
#define TARGET_NR_uname 160
#define TARGET_NR_sethostname 161
#define TARGET_NR_setdomainname 162
#define TARGET_NR_getrlimit 163
#define TARGET_NR_setrlimit 164
#define TARGET_NR_getrusage 165
#define TARGET_NR_umask 166
#define TARGET_NR_prctl 167
#define TARGET_NR_getcpu 168
#define TARGET_NR_gettimeofday 169
#define TARGET_NR_settimeofday 170
#define TARGET_NR_adjtimex 171
#define TARGET_NR_getpid 172
#define TARGET_NR_getppid 173
#define TARGET_NR_getuid 174
#define TARGET_NR_geteuid 175
#define TARGET_NR_getgid 176
#define TARGET_NR_getegid 177
#define TARGET_NR_gettid 178
#define TARGET_NR_sysinfo 179
#define TARGET_NR_mq_open 180
#define TARGET_NR_mq_unlink 181
#define TARGET_NR_mq_timedsend 182
#define TARGET_NR_mq_timedreceive 183
#define TARGET_NR_mq_notify 184
#define TARGET_NR_mq_getsetattr 185
#define TARGET_NR_msgget 186
#define TARGET_NR_msgctl 187
#define TARGET_NR_msgrcv 188
#define TARGET_NR_msgsnd 189
#define TARGET_NR_semget 190
#define TARGET_NR_semctl 191
#define TARGET_NR_semtimedop 192
#define TARGET_NR_semop 193
#define TARGET_NR_shmget 194
#define TARGET_NR_shmctl 195
#define TARGET_NR_shmat 196
#define TARGET_NR_shmdt 197
#define TARGET_NR_socket 198
#define TARGET_NR_socketpair 199
#define TARGET_NR_bind 200
#define TARGET_NR_listen 201
#define TARGET_NR_accept 202
#define TARGET_NR_connect 203
#define TARGET_NR_getsockname 204
#define TARGET_NR_getpeername 205
#define TARGET_NR_sendto 206
#define TARGET_NR_recvfrom 207
#define TARGET_NR_setsockopt 208
#define TARGET_NR_getsockopt 209
#define TARGET_NR_shutdown 210
#define TARGET_NR_sendmsg 211
#define TARGET_NR_recvmsg 212
#define TARGET_NR_readahead 213
#define TARGET_NR_brk 214
#define TARGET_NR_munmap 215
#define TARGET_NR_mremap 216
#define TARGET_NR_add_key 217
#define TARGET_NR_request_key 218
#define TARGET_NR_keyctl 219
#define TARGET_NR_clone 220
#define TARGET_NR_execve 221
#define TARGET_NR_mmap2 222
#define TARGET_NR_fadvise64_64 223
#define TARGET_NR_swapon 224
#define TARGET_NR_swapoff 225
#define TARGET_NR_mprotect 226
#define TARGET_NR_msync 227
#define TARGET_NR_mlock 228
#define TARGET_NR_munlock 229
#define TARGET_NR_mlockall 230
#define TARGET_NR_munlockall 231
#define TARGET_NR_mincore 232
#define TARGET_NR_madvise 233
#define TARGET_NR_remap_file_pages 234
#define TARGET_NR_mbind 235
#define TARGET_NR_get_mempolicy 236
#define TARGET_NR_set_mempolicy 237
#define TARGET_NR_migrate_pages 238
#define TARGET_NR_move_pages 239
#define TARGET_NR_rt_tgsigqueueinfo 240
#define TARGET_NR_perf_event_open 241
#define TARGET_NR_accept4 242
#define TARGET_NR_recvmmsg 243
#define TARGET_NR_arch_specific_syscall 244
#define TARGET_NR_wait4 260
#define TARGET_NR_prlimit64 261
#define TARGET_NR_fanotify_init 262
#define TARGET_NR_fanotify_mark 263
#define TARGET_NR_name_to_handle_at 264
#define TARGET_NR_open_by_handle_at 265
#define TARGET_NR_clock_adjtime 266
#define TARGET_NR_syncfs 267
#define TARGET_NR_setns 268
#define TARGET_NR_sendmmsg 269
#define TARGET_NR_process_vm_readv 270
#define TARGET_NR_process_vm_writev 271
#define TARGET_NR_kcmp 272
#define TARGET_NR_finit_module 273
#define TARGET_NR_sched_setattr 274
#define TARGET_NR_sched_getattr 275
#define TARGET_NR_renameat2 276
#define TARGET_NR_seccomp 277
#define TARGET_NR_getrandom 278
#define TARGET_NR_memfd_create 279
#define TARGET_NR_bpf 280
#define TARGET_NR_execveat 281
#define TARGET_NR_userfaultfd 282
#define TARGET_NR_membarrier 283
#define TARGET_NR_mlock2 284
#define TARGET_NR_copy_file_range 285
#define TARGET_NR_preadv2 286
#define TARGET_NR_pwritev2 287
#define TARGET_NR_pkey_mprotect 288
#define TARGET_NR_pkey_alloc 289
#define TARGET_NR_pkey_free 290
#define TARGET_NR_statx 291
#define TARGET_NR_io_pgetevents 292
#define TARGET_NR_rseq 293
#define TARGET_NR_kexec_file_load 294
#define TARGET_NR_clock_gettime64 403
#define TARGET_NR_clock_settime64 404
#define TARGET_NR_clock_adjtime64 405
#define TARGET_NR_clock_getres_time64 406
#define TARGET_NR_clock_nanosleep_time64 407
#define TARGET_NR_timer_gettime64 408
#define TARGET_NR_timer_settime64 409
#define TARGET_NR_timerfd_gettime64 410
#define TARGET_NR_timerfd_settime64 411
#define TARGET_NR_utimensat_time64 412
#define TARGET_NR_pselect6_time64 413
#define TARGET_NR_ppoll_time64 414
#define TARGET_NR_io_pgetevents_time64 416
#define TARGET_NR_recvmmsg_time64 417
#define TARGET_NR_mq_timedsend_time64 418
#define TARGET_NR_mq_timedreceive_time64 419
#define TARGET_NR_semtimedop_time64 420
#define TARGET_NR_rt_sigtimedwait_time64 421
#define TARGET_NR_futex_time64 422
#define TARGET_NR_sched_rr_get_interval_time64 423
#define TARGET_NR_pidfd_send_signal 424
#define TARGET_NR_io_uring_setup 425
#define TARGET_NR_io_uring_enter 426
#define TARGET_NR_io_uring_register 427
#define TARGET_NR_open_tree 428
#define TARGET_NR_move_mount 429
#define TARGET_NR_fsopen 430
#define TARGET_NR_fsconfig 431
#define TARGET_NR_fsmount 432
#define TARGET_NR_fspick 433
#define TARGET_NR_pidfd_open 434
#define TARGET_NR_close_range 436
#define TARGET_NR_openat2 437
#define TARGET_NR_pidfd_getfd 438
#define TARGET_NR_faccessat2 439
#define TARGET_NR_process_madvise 440
#define TARGET_NR_epoll_pwait2 441
#define TARGET_NR_mount_setattr 442
#define TARGET_NR_landlock_create_ruleset 444
#define TARGET_NR_landlock_add_rule 445
#define TARGET_NR_landlock_restrict_self 446
#define TARGET_NR_syscalls 447
#endif /* LINUX_USER_NIOS2_SYSCALL_NR_H */
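
The aliasing at the top of this table makes cacheflush, syscall number 244 via TARGET_NR_arch_specific_syscall, the only nios2-specific entry. A hypothetical guest-side wrapper is sketched below; the (addr, len, op) argument layout is an assumption modeled on similar ports, not something this header specifies:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Hypothetical wrapper: 244 is TARGET_NR_arch_specific_syscall above;
 * the argument layout is an assumption borrowed from similar ports. */
static long nios2_cacheflush(void *addr, unsigned long len, int op)
{
    return syscall(244, addr, len, op);
}

int main(void)
{
    char buf[64];
    return (int)nios2_cacheflush(buf, sizeof(buf), 0);
}
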

View File

@@ -1,49 +0,0 @@
/*
* Nios2 specific CPU ABI and functions for linux-user
*
* Copyright (c) 2016 Marek Vasut <marex@denx.de>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#ifndef NIOS2_TARGET_CPU_H
#define NIOS2_TARGET_CPU_H
static inline void cpu_clone_regs_child(CPUNios2State *env, target_ulong newsp,
unsigned flags)
{
if (newsp) {
env->regs[R_SP] = newsp;
}
env->regs[R_RET0] = 0;
env->regs[7] = 0;
}
static inline void cpu_clone_regs_parent(CPUNios2State *env, unsigned flags)
{
}
static inline void cpu_set_tls(CPUNios2State *env, target_ulong newtls)
{
/*
* Linux kernel 3.10 does not pay any attention to CLONE_SETTLS
* in copy_thread(), so QEMU need not do so either.
*/
}
static inline abi_ulong get_sp_from_cpustate(CPUNios2State *state)
{
return state->regs[R_SP];
}
#endif
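
cpu_clone_regs_child() zeroes R_RET0 so the child observes a syscall return value of 0, and clears r7, which the nios2 syscall ABI uses as its error flag; together these give the child the conventional clone()/fork() contract. A generic illustration of that contract (plain POSIX, not nios2-specific):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();    /* implemented via clone() on Linux */
    if (pid == 0) {
        /* Child: the zeroed return register selects this branch. */
        _exit(0);
    }
    printf("parent sees child pid %d\n", (int)pid);
    wait(NULL);
    return 0;
}
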

View File

@@ -1,14 +0,0 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation, or (at your option) any
* later version. See the COPYING file in the top-level directory.
*/
#ifndef NIOS2_TARGET_ELF_H
#define NIOS2_TARGET_ELF_H
static inline const char *cpu_get_model(uint32_t eflags)
{
return "any";
}
#endif

View File

@@ -1,7 +0,0 @@
#ifndef NIOS2_TARGET_ERRNO_DEFS_H
#define NIOS2_TARGET_ERRNO_DEFS_H
/* Target uses generic errno */
#include "../generic/target_errno_defs.h"
#endif

View File

@@ -1,11 +0,0 @@
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation, or (at your option) any
* later version. See the COPYING file in the top-level directory.
*/
#ifndef NIOS2_TARGET_FCNTL_H
#define NIOS2_TARGET_FCNTL_H
#include "../generic/fcntl.h"
#endif

View File

@@ -1,11 +0,0 @@
/*
* arch/nios2/include/asm/processor.h:
* TASK_UNMAPPED_BASE PAGE_ALIGN(TASK_SIZE / 3)
* TASK_SIZE 0x7FFF0000UL
*/
#define TASK_UNMAPPED_BASE TARGET_PAGE_ALIGN(0x7FFF0000 / 3)
/* arch/nios2/include/asm/elf.h */
#define ELF_ET_DYN_BASE 0xD0000000
#include "../generic/target_mman.h"
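
Worked out numerically, assuming the usual 4 KiB target page size: 0x7FFF0000 / 3 truncates to 0x2AAA5555, and TARGET_PAGE_ALIGN rounds that up to 0x2AAA6000, so the guest's unmapped-area search began about a third of the way into the task size. A small sketch of the same arithmetic:

#include <assert.h>

int main(void)
{
    unsigned long task_size = 0x7FFF0000ul;
    unsigned long page = 4096;   /* assumed TARGET_PAGE_SIZE */
    unsigned long base = (task_size / 3 + page - 1) & ~(page - 1);
    assert(task_size / 3 == 0x2AAA5555ul);
    assert(base == 0x2AAA6000ul);   /* TARGET_PAGE_ALIGN rounds up */
    return 0;
}
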

View File

@@ -1 +0,0 @@
/* No special prctl support required. */

View File

@@ -1 +0,0 @@
/* No target-specific /proc support */

View File

@@ -1 +0,0 @@
#include "../generic/target_resource.h"

View File

@@ -1,9 +0,0 @@
#ifndef NIOS2_TARGET_SIGNAL_H
#define NIOS2_TARGET_SIGNAL_H
#include "../generic/signal.h"
/* Nios2 uses a fixed address on the kuser page for sigreturn. */
#define TARGET_ARCH_HAS_SIGTRAMP_PAGE 0
#endif /* NIOS2_TARGET_SIGNAL_H */

View File

@@ -1 +0,0 @@
#include "../generic/target_structs.h"

View File

@@ -1,37 +0,0 @@
#ifndef NIOS2_TARGET_SYSCALL_H
#define NIOS2_TARGET_SYSCALL_H
#define UNAME_MACHINE "nios2"
#define UNAME_MINIMUM_RELEASE "3.19.0"
struct target_pt_regs {
unsigned long r8; /* r8-r15 Caller-saved GP registers */
unsigned long r9;
unsigned long r10;
unsigned long r11;
unsigned long r12;
unsigned long r13;
unsigned long r14;
unsigned long r15;
unsigned long r1; /* Assembler temporary */
unsigned long r2; /* Retval LS 32bits */
unsigned long r3; /* Retval MS 32bits */
unsigned long r4; /* r4-r7 Register arguments */
unsigned long r5;
unsigned long r6;
unsigned long r7;
unsigned long orig_r2; /* Copy of r2 ?? */
unsigned long ra; /* Return address */
unsigned long fp; /* Frame pointer */
unsigned long sp; /* Stack pointer */
unsigned long gp; /* Global pointer */
unsigned long estatus;
unsigned long ea; /* Exception return address (pc) */
unsigned long orig_r7;
};
#define TARGET_MCL_CURRENT 1
#define TARGET_MCL_FUTURE 2
#define TARGET_MCL_ONFAULT 4
#endif /* NIOS2_TARGET_SYSCALL_H */
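
The r2/r3 comments in target_pt_regs reflect how a 64-bit result is split across two 32-bit registers on this ILP32 target: r2 carries the low half, r3 the high half. A standalone sketch of that split:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t retval = 0x1122334455667788ull;
    uint32_t r2 = (uint32_t)retval;           /* "Retval LS 32bits" */
    uint32_t r3 = (uint32_t)(retval >> 32);   /* "Retval MS 32bits" */
    assert(r2 == 0x55667788u);
    assert(r3 == 0x11223344u);
    return 0;
}
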

View File

@@ -1 +0,0 @@
#include "../generic/termbits.h"

View File

@@ -73,7 +73,7 @@
#if defined(TARGET_I386) || defined(TARGET_ARM) || defined(TARGET_SH4) \
|| defined(TARGET_M68K) || defined(TARGET_CRIS) \
|| defined(TARGET_S390X) || defined(TARGET_OPENRISC) \
|| defined(TARGET_NIOS2) || defined(TARGET_RISCV) \
|| defined(TARGET_RISCV) \
|| defined(TARGET_XTENSA) || defined(TARGET_LOONGARCH64)
#define TARGET_IOC_SIZEBITS 14
@@ -1974,7 +1974,7 @@ struct target_stat64 {
abi_ulong __unused5;
};
#elif defined(TARGET_OPENRISC) || defined(TARGET_NIOS2) \
#elif defined(TARGET_OPENRISC) \
|| defined(TARGET_RISCV) || defined(TARGET_HEXAGON)
/* These are the asm-generic versions of the stat and stat64 structures */

View File

@@ -2833,37 +2833,6 @@ config_host_data.set('CONFIG_ARM_AES_BUILTIN', cc.compiles('''
void foo(uint8x16_t *p) { *p = vaesmcq_u8(*p); }
'''))
have_pvrdma = get_option('pvrdma') \
.require(rdma.found(), error_message: 'PVRDMA requires OpenFabrics libraries') \
.require(cc.compiles(gnu_source_prefix + '''
#include <sys/mman.h>
int main(void)
{
char buf = 0;
void *addr = &buf;
addr = mremap(addr, 0, 1, MREMAP_MAYMOVE | MREMAP_FIXED);
return 0;
}'''), error_message: 'PVRDMA requires mremap').allowed()
if have_pvrdma
config_host_data.set('LEGACY_RDMA_REG_MR', not cc.links('''
#include <infiniband/verbs.h>
int main(void)
{
struct ibv_mr *mr;
struct ibv_pd *pd = NULL;
size_t length = 10;
uint64_t iova = 0;
int access = 0;
void *addr = NULL;
mr = ibv_reg_mr_iova(pd, addr, length, iova, access);
ibv_dereg_mr(mr);
return 0;
}'''))
endif
if get_option('membarrier').disabled()
have_membarrier = false
elif host_os == 'windows'
@@ -2971,7 +2940,6 @@ disassemblers = {
'm68k' : ['CONFIG_M68K_DIS'],
'microblaze' : ['CONFIG_MICROBLAZE_DIS'],
'mips' : ['CONFIG_MIPS_DIS'],
'nios2' : ['CONFIG_NIOS2_DIS'],
'or1k' : ['CONFIG_OPENRISC_DIS'],
'ppc' : ['CONFIG_PPC_DIS'],
'riscv' : ['CONFIG_RISCV_DIS'],
@@ -2997,7 +2965,6 @@ host_kconfig = \
(have_vhost_kernel ? ['CONFIG_VHOST_KERNEL=y'] : []) + \
(have_virtfs ? ['CONFIG_VIRTFS=y'] : []) + \
(host_os == 'linux' ? ['CONFIG_LINUX=y'] : []) + \
(have_pvrdma ? ['CONFIG_PVRDMA=y'] : []) + \
(multiprocess_allowed ? ['CONFIG_MULTIPROCESS_ALLOWED=y'] : []) + \
(vfio_user_server_allowed ? ['CONFIG_VFIO_USER_SERVER_ALLOWED=y'] : []) + \
(hv_balloon ? ['CONFIG_HV_BALLOON_POSSIBLE=y'] : [])
@@ -3361,8 +3328,6 @@ if have_system
'hw/pci',
'hw/pci-host',
'hw/ppc',
'hw/rdma',
'hw/rdma/vmw',
'hw/rtc',
'hw/s390x',
'hw/scsi',
@@ -3398,7 +3363,6 @@ if have_system or have_user
'target/i386/kvm',
'target/loongarch',
'target/mips/tcg',
'target/nios2',
'target/ppc',
'target/riscv',
'target/s390x',
@@ -4036,7 +4000,6 @@ if have_tools
}]
endforeach
subdir('contrib/rdmacm-mux')
subdir('contrib/elf2dmp')
executable('qemu-edid', files('qemu-edid.c', 'hw/display/edid-generate.c'),
@@ -4442,7 +4405,6 @@ summary_info += {'Linux AIO support': libaio}
summary_info += {'Linux io_uring support': linux_io_uring}
summary_info += {'ATTR/XATTR support': libattr}
summary_info += {'RDMA support': rdma}
summary_info += {'PVRDMA support': have_pvrdma}
summary_info += {'fdt support': fdt_opt == 'disabled' ? false : fdt_opt}
summary_info += {'libcap-ng support': libcap_ng}
summary_info += {'bpf support': libbpf}

View File

@@ -198,8 +198,6 @@ option('opengl', type : 'feature', value : 'auto',
description: 'OpenGL support')
option('rdma', type : 'feature', value : 'auto',
description: 'Enable RDMA-based migration')
option('pvrdma', type : 'feature', value : 'auto',
description: 'Enable PVRDMA support')
option('gtk', type : 'feature', value : 'auto',
description: 'GTK+ user interface')
option('sdl', type : 'feature', value : 'auto',

View File

@@ -31,7 +31,6 @@
#include "qapi/type-helpers.h"
#include "hw/mem/memory-device.h"
#include "hw/intc/intc.h"
#include "hw/rdma/rdma.h"
NameInfo *qmp_query_name(Error **errp)
{

View File

@@ -33,7 +33,7 @@
{ 'enum' : 'SysEmuTarget',
'data' : [ 'aarch64', 'alpha', 'arm', 'avr', 'cris', 'hppa', 'i386',
'loongarch64', 'm68k', 'microblaze', 'microblazeel', 'mips', 'mips64',
'mips64el', 'mipsel', 'nios2', 'or1k', 'ppc',
'mips64el', 'mipsel', 'or1k', 'ppc',
'ppc64', 'riscv32', 'riscv64', 'rx', 's390x', 'sh4',
'sh4eb', 'sparc', 'sparc64', 'tricore',
'x86_64', 'xtensa', 'xtensaeb' ] }
@@ -1737,23 +1737,6 @@
'returns': 'HumanReadableText',
'features': [ 'unstable' ] }
##
# @x-query-rdma:
#
# Query RDMA state
#
# Features:
#
# @unstable: This command is meant for debugging.
#
# Returns: RDMA state
#
# Since: 6.2
##
{ 'command': 'x-query-rdma',
'returns': 'HumanReadableText',
'features': [ 'unstable' ] }
##
# @x-query-roms:
#

View File

@@ -62,7 +62,6 @@ if have_system
'cryptodev',
'qdev',
'pci',
'rdma',
'rocker',
'tpm',
]

View File

@@ -54,7 +54,6 @@
{ 'include': 'dump.json' }
{ 'include': 'net.json' }
{ 'include': 'ebpf.json' }
{ 'include': 'rdma.json' }
{ 'include': 'rocker.json' }
{ 'include': 'tpm.json' }
{ 'include': 'ui.json' }

View File

@@ -1,38 +0,0 @@
# -*- Mode: Python -*-
# vim: filetype=python
#
##
# = RDMA device
##
##
# @RDMA_GID_STATUS_CHANGED:
#
# Emitted when guest driver adds/deletes GID to/from device
#
# @netdev: RoCE Network Device name
#
# @gid-status: Add or delete indication
#
# @subnet-prefix: Subnet Prefix
#
# @interface-id: Interface ID
#
# Since: 4.0
#
# Example:
#
# <- {"timestamp": {"seconds": 1541579657, "microseconds": 986760},
# "event": "RDMA_GID_STATUS_CHANGED",
# "data":
# {"netdev": "bridge0",
# "interface-id": 15880512517475447892,
# "gid-status": true,
# "subnet-prefix": 33022}}
##
{ 'event': 'RDMA_GID_STATUS_CHANGED',
'data': { 'netdev' : 'str',
'gid-status' : 'bool',
'subnet-prefix' : 'uint64',
'interface-id' : 'uint64' } }

View File

@@ -4849,10 +4849,10 @@ ERST
DEF("semihosting", 0, QEMU_OPTION_semihosting,
"-semihosting semihosting mode\n",
QEMU_ARCH_ARM | QEMU_ARCH_M68K | QEMU_ARCH_XTENSA |
QEMU_ARCH_MIPS | QEMU_ARCH_NIOS2 | QEMU_ARCH_RISCV)
QEMU_ARCH_MIPS | QEMU_ARCH_RISCV)
SRST
``-semihosting``
Enable :ref:`Semihosting` mode (ARM, M68K, Xtensa, MIPS, Nios II, RISC-V only).
Enable :ref:`Semihosting` mode (ARM, M68K, Xtensa, MIPS, RISC-V only).
.. warning::
Note that this allows guest direct access to the host filesystem, so
@@ -4865,10 +4865,10 @@ DEF("semihosting-config", HAS_ARG, QEMU_OPTION_semihosting_config,
"-semihosting-config [enable=on|off][,target=native|gdb|auto][,chardev=id][,userspace=on|off][,arg=str[,...]]\n" \
" semihosting configuration\n",
QEMU_ARCH_ARM | QEMU_ARCH_M68K | QEMU_ARCH_XTENSA |
QEMU_ARCH_MIPS | QEMU_ARCH_NIOS2 | QEMU_ARCH_RISCV)
QEMU_ARCH_MIPS | QEMU_ARCH_RISCV)
SRST
``-semihosting-config [enable=on|off][,target=native|gdb|auto][,chardev=id][,userspace=on|off][,arg=str[,...]]``
Enable and configure :ref:`Semihosting` (ARM, M68K, Xtensa, MIPS, Nios II, RISC-V
Enable and configure :ref:`Semihosting` (ARM, M68K, Xtensa, MIPS, RISC-V
only).
.. warning::
@@ -5113,9 +5113,6 @@ SRST
allows a co-operating external process to access the QEMU memory
region.
The ``share`` is also required for pvrdma devices due to
limitations in the RDMA API provided by Linux.
Setting share=on might affect the ability to configure NUMA
bindings for the memory backend under some circumstances, see
Documentation/vm/numa\_memory\_policy.txt on the Linux kernel

Some files were not shown because too many files have changed in this diff.