/*	$NetBSD: uipc_usrreq.c,v 1.199 2020/08/26 22:54:30 christos Exp $	*/

/*-
 * Copyright (c) 1998, 2000, 2004, 2008, 2009, 2020 The NetBSD Foundation, Inc.
 * All rights reserved.
 *
 * This code is derived from software contributed to The NetBSD Foundation
 * by Jason R. Thorpe of the Numerical Aerospace Simulation Facility,
 * NASA Ames Research Center, and by Andrew Doran.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
 * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
 * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/*
 * Copyright (c) 1982, 1986, 1989, 1991, 1993
 *	The Regents of the University of California.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)uipc_usrreq.c	8.9 (Berkeley) 5/14/95
 */

/*
 * Copyright (c) 1997 Christopher G. Demetriou.  All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. All advertising materials mentioning features or use of this software
 *    must display the following acknowledgement:
 *	This product includes software developed by the University of
 *	California, Berkeley and its contributors.
 * 4. Neither the name of the University nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 *
 *	@(#)uipc_usrreq.c	8.9 (Berkeley) 5/14/95
 */

#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: uipc_usrreq.c,v 1.199 2020/08/26 22:54:30 christos Exp $");

#ifdef _KERNEL_OPT
#include "opt_compat_netbsd.h"
#endif

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/filedesc.h>
#include <sys/domain.h>
#include <sys/protosw.h>
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/unpcb.h>
#include <sys/un.h>
#include <sys/namei.h>
#include <sys/vnode.h>
#include <sys/file.h>
#include <sys/stat.h>
#include <sys/mbuf.h>
#include <sys/kauth.h>
#include <sys/kmem.h>
#include <sys/atomic.h>
#include <sys/uidinfo.h>
#include <sys/kernel.h>
#include <sys/kthread.h>
#include <sys/compat_stub.h>

#include <compat/sys/socket.h>
#include <compat/net/route_70.h>

/*
 * Unix communications domain.
 *
 * TODO:
 *	RDM
 *	rethink name space problems
 *	need a proper out-of-band
 *
 * Notes on locking:
 *
 * The generic rules noted in uipc_socket2.c apply.  In addition:
 *
 * o We have a global lock, uipc_lock.
 *
 * o All datagram sockets are locked by uipc_lock.
 *
 * o For stream socketpairs, the two endpoints are created sharing the same
 *   independent lock.  Sockets presented to PRU_CONNECT2 must already have
 *   matching locks.
 *
 * o Stream sockets created via socket() start life with their own
 *   independent lock.
 *
 * o Stream connections to a named endpoint are slightly more complicated.
 *   Sockets that have called listen() have their lock pointer mutated to
 *   the global uipc_lock.  When establishing a connection, the connecting
 *   socket also has its lock mutated to uipc_lock, which matches the head
 *   (listening socket).  We create a new socket for accept() to return, and
 *   that also shares the head's lock.  Until the connection is completely
 *   done on both ends, all three sockets are locked by uipc_lock.  Once the
 *   connection is complete, the association with the head's lock is broken.
 *   The connecting socket and the socket returned from accept() have their
 *   lock pointers mutated away from uipc_lock, and back to the connecting
 *   socket's original, independent lock.  The head continues to be locked
 *   by uipc_lock.
 *
 * o If uipc_lock is determined to be a significant source of contention,
 *   it could easily be hashed out.  It is difficult to simply make it an
 *   independent lock because of visibility / garbage collection issues:
 *   if a socket has been associated with a lock at any point, that lock
 *   must remain valid until the socket is no longer visible in the system.
 *   The lock must not be freed or otherwise destroyed until any sockets
 *   that had referenced it have also been destroyed.
 */

const struct sockaddr_un sun_noname = {
	.sun_len = offsetof(struct sockaddr_un, sun_path),
	.sun_family = AF_LOCAL,
};
ino_t	unp_ino;			/* prototype for fake inode numbers */

static struct mbuf * unp_addsockcred(struct lwp *, struct mbuf *);
static void unp_discard_later(file_t *);
static void unp_discard_now(file_t *);
static void unp_disconnect1(struct unpcb *);
static bool unp_drop(struct unpcb *, int);
static int  unp_internalize(struct mbuf **);
static void unp_mark(file_t *);
static void unp_scan(struct mbuf *, void (*)(file_t *), int);
static void unp_shutdown1(struct unpcb *);
static void unp_thread(void *);
static void unp_thread_kick(void);

static kmutex_t *uipc_lock;

static kcondvar_t unp_thread_cv;
static lwp_t *unp_thread_lwp;
static SLIST_HEAD(,file) unp_thread_discard;
static int unp_defer;

/* Compat interface */

struct mbuf * stub_compat_70_unp_addsockcred(lwp_t *, struct mbuf *);

struct mbuf * stub_compat_70_unp_addsockcred(struct lwp *lwp,
    struct mbuf *control)
{
	/* just copy our initial argument */
	return control;
}

bool compat70_ocreds_valid = false;

/*
 * Initialize Unix protocols.
 */
void
uipc_init(void)
{
	int error;

	uipc_lock = mutex_obj_alloc(MUTEX_DEFAULT, IPL_NONE);
	cv_init(&unp_thread_cv, "unpgc");

	error = kthread_create(PRI_NONE, KTHREAD_MPSAFE, NULL, unp_thread,
	    NULL, &unp_thread_lwp, "unpgc");
	if (error != 0)
		panic("uipc_init %d", error);
}

static void
unp_connid(struct lwp *l, struct unpcb *unp, int flags)
{
	unp->unp_connid.unp_pid = l->l_proc->p_pid;
	unp->unp_connid.unp_euid = kauth_cred_geteuid(l->l_cred);
	unp->unp_connid.unp_egid = kauth_cred_getegid(l->l_cred);
	unp->unp_flags |= flags;
}

/*
 * A connection succeeded: disassociate both endpoints from the head's
 * lock, and make them share their own lock.  There is a race here: for
 * a very brief time one endpoint will be locked by a different lock
 * than the other end.  However, since the current thread holds the old
 * lock (the listening socket's lock, the head) access can still only be
 * made to one side of the connection.
 */
static void
unp_setpeerlocks(struct socket *so, struct socket *so2)
{
	struct unpcb *unp;
	kmutex_t *lock;

	KASSERT(solocked2(so, so2));

	/*
	 * Bail out if either end of the socket is not yet fully
	 * connected or accepted.  We only break the lock association
	 * with the head when the pair of sockets stand completely
	 * on their own.
	 */
	KASSERT(so->so_head == NULL);
	if (so2->so_head != NULL)
		return;

	/*
	 * Drop references to old lock.  A third reference (from the
	 * queue head) must be held as we still hold its lock.  Bonus:
	 * we don't need to worry about garbage collecting the lock.
	 */
	lock = so->so_lock;
	KASSERT(lock == uipc_lock);
	mutex_obj_free(lock);
	mutex_obj_free(lock);

	/*
	 * Grab stream lock from the initiator and share between the two
	 * endpoints.  Issue memory barrier to ensure all modifications
	 * become globally visible before the lock change.  so2 is
	 * assumed not to have a stream lock, because it was created
	 * purely for the server side to accept this connection and
	 * started out life using the domain-wide lock.
	 */
	unp = sotounpcb(so);
	KASSERT(unp->unp_streamlock != NULL);
	KASSERT(sotounpcb(so2)->unp_streamlock == NULL);
	lock = unp->unp_streamlock;
	unp->unp_streamlock = NULL;
	mutex_obj_hold(lock);
	membar_exit();
	/*
	 * possible race if lock is not held - see comment in
	 * uipc_usrreq(PRU_ACCEPT).
	 */
	KASSERT(mutex_owned(lock));
	solockreset(so, lock);
	solockreset(so2, lock);
}

/*
 * Reset a socket's lock back to the domain-wide lock.
 */
static void
unp_resetlock(struct socket *so)
{
	kmutex_t *olock, *nlock;
	struct unpcb *unp;

	KASSERT(solocked(so));

	olock = so->so_lock;
	nlock = uipc_lock;
	if (olock == nlock)
		return;
	unp = sotounpcb(so);
	KASSERT(unp->unp_streamlock == NULL);
	unp->unp_streamlock = olock;
	mutex_obj_hold(nlock);
	mutex_enter(nlock);
	solockreset(so, nlock);
	mutex_exit(olock);
}

static void
unp_free(struct unpcb *unp)
{
	if (unp->unp_addr)
		free(unp->unp_addr, M_SONAME);
	if (unp->unp_streamlock != NULL)
		mutex_obj_free(unp->unp_streamlock);
	kmem_free(unp, sizeof(*unp));
}

static int
unp_output(struct mbuf *m, struct mbuf *control, struct unpcb *unp)
{
	struct socket *so2;
	const struct sockaddr_un *sun;

	/* XXX: server side closed the socket */
	if (unp->unp_conn == NULL)
		return ECONNREFUSED;
	so2 = unp->unp_conn->unp_socket;

	KASSERT(solocked(so2));

	if (unp->unp_addr)
		sun = unp->unp_addr;
	else
		sun = &sun_noname;
	if (unp->unp_conn->unp_flags & UNP_WANTCRED)
		control = unp_addsockcred(curlwp, control);
	if (unp->unp_conn->unp_flags & UNP_OWANTCRED)
		MODULE_HOOK_CALL(uipc_unp_70_hook, (curlwp, control),
		    stub_compat_70_unp_addsockcred(curlwp, control), control);
	if (sbappendaddr(&so2->so_rcv, (const struct sockaddr *)sun, m,
	    control) == 0) {
		unp_dispose(control);
		m_freem(control);
		m_freem(m);
		/* Don't call soroverflow because we're returning this
		 * error directly to the sender. */
		so2->so_rcv.sb_overflowed++;
		return ENOBUFS;
	} else {
		sorwakeup(so2);
		return 0;
	}
}

static void
unp_setaddr(struct socket *so, struct sockaddr *nam, bool peeraddr)
{
	const struct sockaddr_un *sun = NULL;
	struct unpcb *unp;

	KASSERT(solocked(so));
	unp = sotounpcb(so);

	if (peeraddr) {
		if (unp->unp_conn && unp->unp_conn->unp_addr)
			sun = unp->unp_conn->unp_addr;
	} else {
		if (unp->unp_addr)
			sun = unp->unp_addr;
	}
	if (sun == NULL)
		sun = &sun_noname;

	memcpy(nam, sun, sun->sun_len);
}

static int
unp_rcvd(struct socket *so, int flags, struct lwp *l)
{
	struct unpcb *unp = sotounpcb(so);
	struct socket *so2;
	u_int newhiwat;

	KASSERT(solocked(so));
	KASSERT(unp != NULL);

	switch (so->so_type) {

	case SOCK_DGRAM:
		panic("uipc 1");
		/*NOTREACHED*/

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
#define	rcv (&so->so_rcv)
#define	snd (&so2->so_snd)
		if (unp->unp_conn == 0)
			break;
		so2 = unp->unp_conn->unp_socket;
		KASSERT(solocked2(so, so2));
		/*
		 * Adjust backpressure on sender
		 * and wakeup any waiting to write.
		 */
		snd->sb_mbmax += unp->unp_mbcnt - rcv->sb_mbcnt;
		unp->unp_mbcnt = rcv->sb_mbcnt;
		newhiwat = snd->sb_hiwat + unp->unp_cc - rcv->sb_cc;
		(void)chgsbsize(so2->so_uidinfo,
		    &snd->sb_hiwat, newhiwat, RLIM_INFINITY);
		unp->unp_cc = rcv->sb_cc;
		sowwakeup(so2);
#undef snd
#undef rcv
		break;

	default:
		panic("uipc 2");
	}

	return 0;
}

static int
unp_recvoob(struct socket *so, struct mbuf *m, int flags)
{
	KASSERT(solocked(so));

	return EOPNOTSUPP;
}

static int
unp_send(struct socket *so, struct mbuf *m, struct sockaddr *nam,
    struct mbuf *control, struct lwp *l)
{
	struct unpcb *unp = sotounpcb(so);
	int error = 0;
	u_int newhiwat;
	struct socket *so2;

	KASSERT(solocked(so));
	KASSERT(unp != NULL);
	KASSERT(m != NULL);

	/*
	 * Note: unp_internalize() rejects any control message
	 * other than SCM_RIGHTS, and only allows one.  This
	 * has the side-effect of preventing a caller from
	 * forging SCM_CREDS.
	 */
	if (control) {
		sounlock(so);
		error = unp_internalize(&control);
		solock(so);
		if (error != 0) {
			m_freem(control);
			m_freem(m);
			return error;
		}
	}

	switch (so->so_type) {

	case SOCK_DGRAM: {
		KASSERT(so->so_lock == uipc_lock);
		if (nam) {
			if ((so->so_state & SS_ISCONNECTED) != 0)
				error = EISCONN;
			else {
				/*
				 * Note: once connected, the
				 * socket's lock must not be
				 * dropped until we have sent
				 * the message and disconnected.
				 * This is necessary to prevent
				 * intervening control ops, like
				 * another connection.
				 */
				error = unp_connect(so, nam, l);
			}
		} else {
			if ((so->so_state & SS_ISCONNECTED) == 0)
				error = ENOTCONN;
		}
		if (error) {
			unp_dispose(control);
			m_freem(control);
			m_freem(m);
			return error;
		}
		error = unp_output(m, control, unp);
		if (nam)
			unp_disconnect1(unp);
		break;
	}

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
#define	rcv (&so2->so_rcv)
#define	snd (&so->so_snd)
		if (unp->unp_conn == NULL) {
			error = ENOTCONN;
			break;
		}
		so2 = unp->unp_conn->unp_socket;
		KASSERT(solocked2(so, so2));
		if (unp->unp_conn->unp_flags & UNP_WANTCRED) {
			/*
			 * Credentials are passed only once on
			 * SOCK_STREAM and SOCK_SEQPACKET.
			 */
			unp->unp_conn->unp_flags &= ~UNP_WANTCRED;
			control = unp_addsockcred(l, control);
		}
		if (unp->unp_conn->unp_flags & UNP_OWANTCRED) {
			/*
			 * Credentials are passed only once on
			 * SOCK_STREAM and SOCK_SEQPACKET.
			 */
			unp->unp_conn->unp_flags &= ~UNP_OWANTCRED;
			MODULE_HOOK_CALL(uipc_unp_70_hook, (curlwp, control),
			    stub_compat_70_unp_addsockcred(curlwp, control),
			    control);
		}
		/*
		 * Send to paired receive port, and then reduce
		 * send buffer hiwater marks to maintain backpressure.
		 * Wake up readers.
		 */
		if (control) {
			if (sbappendcontrol(rcv, m, control) != 0)
				control = NULL;
		} else {
			switch(so->so_type) {
			case SOCK_SEQPACKET:
				sbappendrecord(rcv, m);
				break;
			case SOCK_STREAM:
				sbappend(rcv, m);
				break;
			default:
				panic("uipc_usrreq");
				break;
			}
		}
		snd->sb_mbmax -=
		    rcv->sb_mbcnt - unp->unp_conn->unp_mbcnt;
		unp->unp_conn->unp_mbcnt = rcv->sb_mbcnt;
		newhiwat = snd->sb_hiwat -
		    (rcv->sb_cc - unp->unp_conn->unp_cc);
		(void)chgsbsize(so->so_uidinfo,
		    &snd->sb_hiwat, newhiwat, RLIM_INFINITY);
		unp->unp_conn->unp_cc = rcv->sb_cc;
		sorwakeup(so2);
#undef snd
#undef rcv
		if (control != NULL) {
			unp_dispose(control);
			m_freem(control);
		}
		break;

	default:
		panic("uipc 4");
	}

	return error;
}

static int
unp_sendoob(struct socket *so, struct mbuf *m, struct mbuf * control)
{
	KASSERT(solocked(so));

	m_freem(m);
	m_freem(control);

	return EOPNOTSUPP;
}

/*
 * Unix domain socket option processing.
 */
int
uipc_ctloutput(int op, struct socket *so, struct sockopt *sopt)
{
	struct unpcb *unp = sotounpcb(so);
	int optval = 0, error = 0;

	KASSERT(solocked(so));

	if (sopt->sopt_level != 0) {
		error = ENOPROTOOPT;
	} else switch (op) {

	case PRCO_SETOPT:
		switch (sopt->sopt_name) {
		case LOCAL_OCREDS:
			if (!compat70_ocreds_valid) {
				error = ENOPROTOOPT;
				break;
			}
			/* FALLTHROUGH */
		case LOCAL_CREDS:
		case LOCAL_CONNWAIT:
			error = sockopt_getint(sopt, &optval);
			if (error)
				break;
			switch (sopt->sopt_name) {
#define	OPTSET(bit) \
	if (optval) \
		unp->unp_flags |= (bit); \
	else \
		unp->unp_flags &= ~(bit);

			case LOCAL_CREDS:
				OPTSET(UNP_WANTCRED);
				break;
			case LOCAL_CONNWAIT:
				OPTSET(UNP_CONNWAIT);
				break;
			case LOCAL_OCREDS:
				OPTSET(UNP_OWANTCRED);
				break;
			}
			break;
#undef OPTSET

		default:
			error = ENOPROTOOPT;
			break;
		}
		break;

	case PRCO_GETOPT:
		sounlock(so);
		switch (sopt->sopt_name) {
		case LOCAL_PEEREID:
			if (unp->unp_flags & UNP_EIDSVALID) {
				error = sockopt_set(sopt, &unp->unp_connid,
				    sizeof(unp->unp_connid));
			} else {
				error = EINVAL;
			}
			break;
		case LOCAL_CREDS:
#define	OPTBIT(bit)	(unp->unp_flags & (bit) ? 1 : 0)

			optval = OPTBIT(UNP_WANTCRED);
			error = sockopt_setint(sopt, optval);
			break;
		case LOCAL_OCREDS:
			if (compat70_ocreds_valid) {
				optval = OPTBIT(UNP_OWANTCRED);
				error = sockopt_setint(sopt, optval);
				break;
			}
#undef OPTBIT
			/* FALLTHROUGH */
		default:
			error = ENOPROTOOPT;
			break;
		}
		solock(so);
		break;
	}
	return (error);
}

/*
 * Both send and receive buffers are allocated PIPSIZ bytes of buffering
 * for stream sockets, although the total for sender and receiver is
 * actually only PIPSIZ.
 * Datagram sockets really use the sendspace as the maximum datagram size,
 * and don't really want to reserve the sendspace.  Their recvspace should
 * be large enough for at least one max-size datagram plus address.
 */
#ifndef PIPSIZ
#define	PIPSIZ	8192
#endif
u_long	unpst_sendspace = PIPSIZ;
u_long	unpst_recvspace = PIPSIZ;
u_long	unpdg_sendspace = 2*1024;	/* really max datagram size */
u_long	unpdg_recvspace = 16*1024;

u_int	unp_rights;			/* files in flight */
u_int	unp_rights_ratio = 2;		/* limit, fraction of maxfiles */

static int
unp_attach(struct socket *so, int proto)
{
	struct unpcb *unp = sotounpcb(so);
	u_long sndspc, rcvspc;
	int error;

	KASSERT(unp == NULL);

	switch (so->so_type) {
	case SOCK_SEQPACKET:
		/* FALLTHROUGH */
	case SOCK_STREAM:
		if (so->so_lock == NULL) {
			so->so_lock = mutex_obj_alloc(MUTEX_DEFAULT, IPL_NONE);
			solock(so);
		}
		sndspc = unpst_sendspace;
		rcvspc = unpst_recvspace;
		break;

	case SOCK_DGRAM:
		if (so->so_lock == NULL) {
			mutex_obj_hold(uipc_lock);
			so->so_lock = uipc_lock;
			solock(so);
		}
		sndspc = unpdg_sendspace;
		rcvspc = unpdg_recvspace;
		break;

	default:
		panic("unp_attach");
	}

	if (so->so_snd.sb_hiwat == 0 || so->so_rcv.sb_hiwat == 0) {
		error = soreserve(so, sndspc, rcvspc);
		if (error) {
			return error;
		}
	}

	unp = kmem_zalloc(sizeof(*unp), KM_SLEEP);
	nanotime(&unp->unp_ctime);
	unp->unp_socket = so;
	so->so_pcb = unp;

	KASSERT(solocked(so));
	return 0;
}

static void
unp_detach(struct socket *so)
{
	struct unpcb *unp;
	vnode_t *vp;

	unp = sotounpcb(so);
	KASSERT(unp != NULL);
	KASSERT(solocked(so));
 retry:
	if ((vp = unp->unp_vnode) != NULL) {
		sounlock(so);
		/* Acquire v_interlock to protect against unp_connect(). */
		/* XXXAD racy */
		mutex_enter(vp->v_interlock);
		vp->v_socket = NULL;
		mutex_exit(vp->v_interlock);
		vrele(vp);
		solock(so);
		unp->unp_vnode = NULL;
	}
	if (unp->unp_conn)
		unp_disconnect1(unp);
	while (unp->unp_refs) {
		KASSERT(solocked2(so, unp->unp_refs->unp_socket));
		if (unp_drop(unp->unp_refs, ECONNRESET)) {
			solock(so);
			goto retry;
		}
	}
	soisdisconnected(so);
	so->so_pcb = NULL;
	if (unp_rights) {
		/*
		 * Normally the receive buffer is flushed later, in sofree,
		 * but if our receive buffer holds references to files that
		 * are now garbage, we will enqueue those file references to
		 * the garbage collector and kick it into action.
		 */
		sorflush(so);
		unp_free(unp);
		unp_thread_kick();
	} else
		unp_free(unp);
}

static int
unp_accept(struct socket *so, struct sockaddr *nam)
{
	struct unpcb *unp = sotounpcb(so);
	struct socket *so2;

	KASSERT(solocked(so));
	KASSERT(nam != NULL);

	/* XXX code review required to determine if unp can ever be NULL */
	if (unp == NULL)
		return EINVAL;

	KASSERT(so->so_lock == uipc_lock);
	/*
	 * Mark the initiating STREAM socket as connected *ONLY*
	 * after it's been accepted.  This prevents a client from
	 * overrunning a server and receiving ECONNREFUSED.
	 */
	if (unp->unp_conn == NULL) {
		/*
		 * This will use the empty socket and will not
		 * allocate.
		 */
		unp_setaddr(so, nam, true);
		return 0;
	}
	so2 = unp->unp_conn->unp_socket;
	if (so2->so_state & SS_ISCONNECTING) {
		KASSERT(solocked2(so, so->so_head));
		KASSERT(solocked2(so2, so->so_head));
		soisconnected(so2);
	}
	/*
	 * If the connection is fully established, break the
	 * association with uipc_lock and give the connected
	 * pair a separate lock to share.
	 *
	 * There is a race here: sotounpcb(so2)->unp_streamlock
	 * is not locked, so when changing so2->so_lock
	 * another thread can grab it while so->so_lock is still
	 * pointing to the (locked) uipc_lock.
	 * This should be harmless, except that it makes
	 * solocked2() and solocked() unreliable.
	 * Another problem is that unp_setaddr() expects the
	 * socket locked.  Grabbing sotounpcb(so2)->unp_streamlock
	 * fixes both issues.
	 */
	mutex_enter(sotounpcb(so2)->unp_streamlock);
	unp_setpeerlocks(so2, so);
	/*
	 * Only now return peer's address, as we may need to
	 * block in order to allocate memory.
	 *
	 * XXX Minor race: connection can be broken while
	 * lock is dropped in unp_setaddr().  We will return
	 * error == 0 and sun_noname as the peer address.
	 */
	unp_setaddr(so, nam, true);
	/* so_lock now points to unp_streamlock */
	mutex_exit(so2->so_lock);
	return 0;
}

static int
unp_ioctl(struct socket *so, u_long cmd, void *nam, struct ifnet *ifp)
{
	return EOPNOTSUPP;
}

static int
unp_stat(struct socket *so, struct stat *ub)
{
	struct unpcb *unp;
	struct socket *so2;

	KASSERT(solocked(so));

	unp = sotounpcb(so);
	if (unp == NULL)
		return EINVAL;

	ub->st_blksize = so->so_snd.sb_hiwat;
	switch (so->so_type) {
	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
		if (unp->unp_conn == 0)
			break;

		so2 = unp->unp_conn->unp_socket;
		KASSERT(solocked2(so, so2));
		ub->st_blksize += so2->so_rcv.sb_cc;
		break;
	default:
		break;
	}
	ub->st_dev = NODEV;
	if (unp->unp_ino == 0)
		unp->unp_ino = unp_ino++;
	ub->st_atimespec = ub->st_mtimespec = ub->st_ctimespec = unp->unp_ctime;
	ub->st_ino = unp->unp_ino;
	ub->st_uid = so->so_uidinfo->ui_uid;
	ub->st_gid = so->so_egid;
	return (0);
}

static int
unp_peeraddr(struct socket *so, struct sockaddr *nam)
{
	KASSERT(solocked(so));
	KASSERT(sotounpcb(so) != NULL);
	KASSERT(nam != NULL);

	unp_setaddr(so, nam, true);
	return 0;
}

static int
unp_sockaddr(struct socket *so, struct sockaddr *nam)
{
	KASSERT(solocked(so));
	KASSERT(sotounpcb(so) != NULL);
	KASSERT(nam != NULL);

	unp_setaddr(so, nam, false);
	return 0;
}

/*
 * We only need to perform this allocation until syscalls other than
 * bind are adjusted to use sockaddr_big.
 */
static struct sockaddr_un *
makeun_sb(struct sockaddr *nam, size_t *addrlen)
{
	struct sockaddr_un *sun;

	*addrlen = nam->sa_len + 1;
	sun = malloc(*addrlen, M_SONAME, M_WAITOK);
	memcpy(sun, nam, nam->sa_len);
	*(((char *)sun) + nam->sa_len) = '\0';
	return sun;
}

static int
unp_bind(struct socket *so, struct sockaddr *nam, struct lwp *l)
{
	struct sockaddr_un *sun;
	struct unpcb *unp;
	vnode_t *vp;
	struct vattr vattr;
	size_t addrlen;
	int error;
	struct pathbuf *pb;
	struct nameidata nd;
	proc_t *p;

	unp = sotounpcb(so);

	KASSERT(solocked(so));
	KASSERT(unp != NULL);
	KASSERT(nam != NULL);

	if (unp->unp_vnode != NULL)
		return (EINVAL);
	if ((unp->unp_flags & UNP_BUSY) != 0) {
		/*
		 * EALREADY may not be strictly accurate, but since this
		 * is a major application error it's hardly a big deal.
		 */
		return (EALREADY);
	}
	unp->unp_flags |= UNP_BUSY;
	sounlock(so);

	p = l->l_proc;
	sun = makeun_sb(nam, &addrlen);

	pb = pathbuf_create(sun->sun_path);
	if (pb == NULL) {
		error = ENOMEM;
		goto bad;
	}
	NDINIT(&nd, CREATE, FOLLOW | LOCKPARENT | TRYEMULROOT, pb);

	/* SHOULD BE ABLE TO ADOPT EXISTING AND wakeup() ALA FIFO's */
	if ((error = namei(&nd)) != 0) {
		pathbuf_destroy(pb);
		goto bad;
	}
	vp = nd.ni_vp;
	if (vp != NULL) {
		VOP_ABORTOP(nd.ni_dvp, &nd.ni_cnd);
		if (nd.ni_dvp == vp)
			vrele(nd.ni_dvp);
		else
			vput(nd.ni_dvp);
		vrele(vp);
		pathbuf_destroy(pb);
		error = EADDRINUSE;
		goto bad;
	}
	vattr_null(&vattr);
	vattr.va_type = VSOCK;
	vattr.va_mode = ACCESSPERMS & ~(p->p_cwdi->cwdi_cmask);
	error = VOP_CREATE(nd.ni_dvp, &nd.ni_vp, &nd.ni_cnd, &vattr);
	if (error) {
		vput(nd.ni_dvp);
		pathbuf_destroy(pb);
		goto bad;
	}
	vp = nd.ni_vp;
	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
	solock(so);
	vp->v_socket = unp->unp_socket;
	unp->unp_vnode = vp;
	unp->unp_addrlen = addrlen;
	unp->unp_addr = sun;
	VOP_UNLOCK(vp);
	vput(nd.ni_dvp);
	unp->unp_flags &= ~UNP_BUSY;
	pathbuf_destroy(pb);
	return (0);

bad:
	free(sun, M_SONAME);
	solock(so);
	unp->unp_flags &= ~UNP_BUSY;
	return (error);
}

static int
unp_listen(struct socket *so, struct lwp *l)
{
	struct unpcb *unp = sotounpcb(so);

	KASSERT(solocked(so));
	KASSERT(unp != NULL);

	/*
	 * If the socket can accept a connection, it must be
	 * locked by uipc_lock.
	 */
	unp_resetlock(so);
	if (unp->unp_vnode == NULL)
		return EINVAL;

	unp_connid(l, unp, UNP_EIDSBIND);
	return 0;
}

static int
unp_disconnect(struct socket *so)
{
	KASSERT(solocked(so));
	KASSERT(sotounpcb(so) != NULL);

	unp_disconnect1(sotounpcb(so));
	return 0;
}

static int
unp_shutdown(struct socket *so)
{
	KASSERT(solocked(so));
	KASSERT(sotounpcb(so) != NULL);

	socantsendmore(so);
	unp_shutdown1(sotounpcb(so));
	return 0;
}

static int
unp_abort(struct socket *so)
{
	KASSERT(solocked(so));
	KASSERT(sotounpcb(so) != NULL);

	(void)unp_drop(sotounpcb(so), ECONNABORTED);
	KASSERT(so->so_head == NULL);
	KASSERT(so->so_pcb != NULL);
	unp_detach(so);
	return 0;
}

static int
unp_connect1(struct socket *so, struct socket *so2, struct lwp *l)
{
	struct unpcb *unp = sotounpcb(so);
	struct unpcb *unp2;

	if (so2->so_type != so->so_type)
		return EPROTOTYPE;

	/*
	 * All three sockets involved must be locked by same lock:
	 *
	 * local endpoint (so)
	 * remote endpoint (so2)
	 * queue head (so2->so_head, only if PR_CONNREQUIRED)
	 */
	KASSERT(solocked2(so, so2));
	KASSERT(so->so_head == NULL);
	if (so2->so_head != NULL) {
		KASSERT(so2->so_lock == uipc_lock);
		KASSERT(solocked2(so2, so2->so_head));
	}

	unp2 = sotounpcb(so2);
	unp->unp_conn = unp2;

	switch (so->so_type) {

	case SOCK_DGRAM:
		unp->unp_nextref = unp2->unp_refs;
		unp2->unp_refs = unp;
		soisconnected(so);
		break;

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:

		/*
		 * SOCK_SEQPACKET and SOCK_STREAM cases are handled by
		 * callers which are unp_connect() or unp_connect2().
		 */

		break;

	default:
		panic("unp_connect1");
	}

	return 0;
}

int
unp_connect(struct socket *so, struct sockaddr *nam, struct lwp *l)
{
	struct sockaddr_un *sun;
	vnode_t *vp;
	struct socket *so2, *so3;
	struct unpcb *unp, *unp2, *unp3;
	size_t addrlen;
	int error;
	struct pathbuf *pb;
	struct nameidata nd;

	unp = sotounpcb(so);
	if ((unp->unp_flags & UNP_BUSY) != 0) {
		/*
		 * EALREADY may not be strictly accurate, but since this
		 * is a major application error it's hardly a big deal.
		 */
		return (EALREADY);
	}
	unp->unp_flags |= UNP_BUSY;
	sounlock(so);

	sun = makeun_sb(nam, &addrlen);
	pb = pathbuf_create(sun->sun_path);
	if (pb == NULL) {
		error = ENOMEM;
		goto bad2;
	}

	NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF | TRYEMULROOT, pb);

	if ((error = namei(&nd)) != 0) {
		pathbuf_destroy(pb);
		goto bad2;
	}
	vp = nd.ni_vp;
	pathbuf_destroy(pb);
	if (vp->v_type != VSOCK) {
		error = ENOTSOCK;
		goto bad;
	}
	if ((error = VOP_ACCESS(vp, VWRITE, l->l_cred)) != 0)
		goto bad;
	/* Acquire v_interlock to protect against unp_detach(). */
	mutex_enter(vp->v_interlock);
	so2 = vp->v_socket;
	if (so2 == NULL) {
		mutex_exit(vp->v_interlock);
		error = ECONNREFUSED;
		goto bad;
	}
	if (so->so_type != so2->so_type) {
		mutex_exit(vp->v_interlock);
		error = EPROTOTYPE;
		goto bad;
	}
	solock(so);
	unp_resetlock(so);
	mutex_exit(vp->v_interlock);
	if ((so->so_proto->pr_flags & PR_CONNREQUIRED) != 0) {
		/*
		 * This may seem somewhat fragile but is OK: if we can
		 * see SO_ACCEPTCONN set on the endpoint, then it must
		 * be locked by the domain-wide uipc_lock.
		 */
		KASSERT((so2->so_options & SO_ACCEPTCONN) == 0 ||
		    so2->so_lock == uipc_lock);
		if ((so2->so_options & SO_ACCEPTCONN) == 0 ||
		    (so3 = sonewconn(so2, false)) == NULL) {
			error = ECONNREFUSED;
			sounlock(so);
			goto bad;
		}
		unp2 = sotounpcb(so2);
		unp3 = sotounpcb(so3);
		if (unp2->unp_addr) {
			unp3->unp_addr = malloc(unp2->unp_addrlen,
			    M_SONAME, M_WAITOK);
			memcpy(unp3->unp_addr, unp2->unp_addr,
			    unp2->unp_addrlen);
			unp3->unp_addrlen = unp2->unp_addrlen;
		}
		unp3->unp_flags = unp2->unp_flags;
		so2 = so3;
		/*
		 * The connector's (client's) credentials are copied from its
		 * process structure at the time of connect() (which is now).
		 */
		unp_connid(l, unp3, UNP_EIDSVALID);
		/*
		 * The receiver's (server's) credentials are copied from the
		 * unp_peercred member of socket on which the former called
		 * listen(); unp_listen() cached that process's credentials
		 * at that time so we can use them now.
		 */
		if (unp2->unp_flags & UNP_EIDSBIND) {
			memcpy(&unp->unp_connid, &unp2->unp_connid,
			    sizeof(unp->unp_connid));
			unp->unp_flags |= UNP_EIDSVALID;
		}
	}
	error = unp_connect1(so, so2, l);
	if (error) {
		sounlock(so);
		goto bad;
	}
	unp2 = sotounpcb(so2);
	switch (so->so_type) {

	/*
	 * SOCK_DGRAM and default cases are handled in prior call to
	 * unp_connect1(), do not add a default case without fixing
	 * unp_connect1().
	 */

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
		unp2->unp_conn = unp;
		if ((unp->unp_flags | unp2->unp_flags) & UNP_CONNWAIT)
			soisconnecting(so);
		else
			soisconnected(so);
		soisconnected(so2);
		/*
		 * If the connection is fully established, break the
		 * association with uipc_lock and give the connected
		 * pair a separate lock to share.
		 */
		KASSERT(so2->so_head != NULL);
		unp_setpeerlocks(so, so2);
		break;

	}
	sounlock(so);
bad:
	vput(vp);
bad2:
	free(sun, M_SONAME);
	solock(so);
	unp->unp_flags &= ~UNP_BUSY;
	return (error);
}

int
unp_connect2(struct socket *so, struct socket *so2)
{
	struct unpcb *unp = sotounpcb(so);
	struct unpcb *unp2;
	int error = 0;

	KASSERT(solocked2(so, so2));

	error = unp_connect1(so, so2, curlwp);
	if (error)
		return error;

	unp2 = sotounpcb(so2);
	switch (so->so_type) {

	/*
	 * SOCK_DGRAM and default cases are handled in prior call to
	 * unp_connect1(), do not add a default case without fixing
	 * unp_connect1().
	 */

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
		unp2->unp_conn = unp;
		soisconnected(so);
		soisconnected(so2);
		break;

	}
	return error;
}

static void
unp_disconnect1(struct unpcb *unp)
{
	struct unpcb *unp2 = unp->unp_conn;
	struct socket *so;

	if (unp2 == 0)
		return;
	unp->unp_conn = 0;
	so = unp->unp_socket;
	switch (so->so_type) {
	case SOCK_DGRAM:
		if (unp2->unp_refs == unp)
			unp2->unp_refs = unp->unp_nextref;
		else {
			unp2 = unp2->unp_refs;
			for (;;) {
				KASSERT(solocked2(so, unp2->unp_socket));
				if (unp2 == 0)
					panic("unp_disconnect1");
				if (unp2->unp_nextref == unp)
					break;
				unp2 = unp2->unp_nextref;
			}
			unp2->unp_nextref = unp->unp_nextref;
		}
		unp->unp_nextref = 0;
		so->so_state &= ~SS_ISCONNECTED;
		break;

	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
		KASSERT(solocked2(so, unp2->unp_socket));
		soisdisconnected(so);
		unp2->unp_conn = 0;
		soisdisconnected(unp2->unp_socket);
		break;
	}
}

static void
unp_shutdown1(struct unpcb *unp)
{
	struct socket *so;

	switch(unp->unp_socket->so_type) {
	case SOCK_SEQPACKET: /* FALLTHROUGH */
	case SOCK_STREAM:
		if (unp->unp_conn && (so = unp->unp_conn->unp_socket))
			socantrcvmore(so);
		break;
	default:
		break;
	}
}

static bool
unp_drop(struct unpcb *unp, int errno)
{
	struct socket *so = unp->unp_socket;

	KASSERT(solocked(so));

	so->so_error = errno;
	unp_disconnect1(unp);
	if (so->so_head) {
		so->so_pcb = NULL;
		/* sofree() drops the socket lock */
		sofree(so);
		unp_free(unp);
		return true;
	}
	return false;
}

#ifdef notdef
unp_drain(void)
{

}
#endif

int
unp_externalize(struct mbuf *rights, struct lwp *l, int flags)
{
	struct cmsghdr * const cm = mtod(rights, struct cmsghdr *);
	struct proc * const p = l->l_proc;
	file_t **rp;
	int error = 0;

	const size_t nfds = (cm->cmsg_len - CMSG_ALIGN(sizeof(*cm))) /
	    sizeof(file_t *);
	if (nfds == 0)
		goto noop;

	int * const fdp = kmem_alloc(nfds * sizeof(int), KM_SLEEP);
	rw_enter(&p->p_cwdi->cwdi_lock, RW_READER);

	/* Make sure the recipient should be able to see the files.. */
	rp = (file_t **)CMSG_DATA(cm);
	for (size_t i = 0; i < nfds; i++) {
		file_t * const fp = *rp++;
		if (fp == NULL) {
			error = EINVAL;
			goto out;
		}
		/*
		 * If we are in a chroot'ed directory, and
		 * someone wants to pass us a directory, make
		 * sure it's inside the subtree we're allowed
		 * to access.
		 */
		if (p->p_cwdi->cwdi_rdir != NULL && fp->f_type == DTYPE_VNODE) {
			vnode_t *vp = fp->f_vnode;
			if ((vp->v_type == VDIR) &&
			    !vn_isunder(vp, p->p_cwdi->cwdi_rdir, l)) {
				error = EPERM;
				goto out;
			}
		}
	}

 restart:
	/*
	 * First loop -- allocate file descriptor table slots for the
	 * new files.
	 */
	for (size_t i = 0; i < nfds; i++) {
		if ((error = fd_alloc(p, 0, &fdp[i])) != 0) {
			/*
			 * Back out what we've done so far.
			 */
			while (i-- > 0) {
				fd_abort(p, NULL, fdp[i]);
			}
			if (error == ENOSPC) {
				fd_tryexpand(p);
				error = 0;
				goto restart;
			}
			/*
			 * This is the error that has historically
			 * been returned, and some callers may
			 * expect it.
			 */
			error = EMSGSIZE;
			goto out;
		}
	}

	/*
	 * Now that adding them has succeeded, update all of the
	 * file passing state and affix the descriptors.
	 */
	rp = (file_t **)CMSG_DATA(cm);
	int *ofdp = (int *)CMSG_DATA(cm);
	for (size_t i = 0; i < nfds; i++) {
		file_t * const fp = *rp++;
		const int fd = fdp[i];
		atomic_dec_uint(&unp_rights);
		fd_set_exclose(l, fd, (flags & O_CLOEXEC) != 0);
		fd_affix(p, fp, fd);
		/*
		 * Done with this file pointer, replace it with a fd.
		 */
		*ofdp++ = fd;
		mutex_enter(&fp->f_lock);
		fp->f_msgcount--;
		mutex_exit(&fp->f_lock);
		/*
		 * Note that fd_affix() adds a reference to the file.
		 * The file may already have been closed by another
		 * LWP in the process, so we must drop the reference
		 * added by unp_internalize() with closef().
		 */
		closef(fp);
	}

	/*
	 * Adjust length, in case of transition from large file_t
	 * pointers to ints.
	 */
	if (sizeof(file_t *) != sizeof(int)) {
		cm->cmsg_len = CMSG_LEN(nfds * sizeof(int));
		rights->m_len = CMSG_SPACE(nfds * sizeof(int));
	}

 out:
	if (__predict_false(error != 0)) {
		file_t **const fpp = (file_t **)CMSG_DATA(cm);
		for (size_t i = 0; i < nfds; i++)
			unp_discard_now(fpp[i]);
		/*
		 * Truncate the array so that nobody will try to interpret
		 * what is now garbage in it.
		 */
		cm->cmsg_len = CMSG_LEN(0);
		rights->m_len = CMSG_SPACE(0);
	}
	rw_exit(&p->p_cwdi->cwdi_lock);
	kmem_free(fdp, nfds * sizeof(int));

 noop:
	/*
	 * Don't disclose kernel memory in the alignment space.
	 */
	KASSERT(cm->cmsg_len <= rights->m_len);
	memset(&mtod(rights, char *)[cm->cmsg_len], 0, rights->m_len -
	    cm->cmsg_len);
	return error;
}

static int
unp_internalize(struct mbuf **controlp)
{
	filedesc_t *fdescp = curlwp->l_fd;
	fdtab_t *dt;
	struct mbuf *control = *controlp;
	struct cmsghdr *newcm, *cm = mtod(control, struct cmsghdr *);
	file_t **rp, **files;
	file_t *fp;
	int i, fd, *fdp;
	int nfds, error;
	u_int maxmsg;

	error = 0;
	newcm = NULL;

	/* Sanity check the control message header. */
	if (cm->cmsg_type != SCM_RIGHTS || cm->cmsg_level != SOL_SOCKET ||
	    cm->cmsg_len > control->m_len ||
	    cm->cmsg_len < CMSG_ALIGN(sizeof(*cm)))
		return (EINVAL);

	/*
	 * Verify that the file descriptors are valid, and acquire
	 * a reference to each.
	 */
	nfds = (cm->cmsg_len - CMSG_ALIGN(sizeof(*cm))) / sizeof(int);
	fdp = (int *)CMSG_DATA(cm);
	maxmsg = maxfiles / unp_rights_ratio;
	for (i = 0; i < nfds; i++) {
		fd = *fdp++;
		if (atomic_inc_uint_nv(&unp_rights) > maxmsg) {
			atomic_dec_uint(&unp_rights);
			nfds = i;
			error = EAGAIN;
			goto out;
		}
		if ((fp = fd_getfile(fd)) == NULL
		    || fp->f_type == DTYPE_KQUEUE) {
			if (fp)
				fd_putfile(fd);
			atomic_dec_uint(&unp_rights);
			nfds = i;
			error = EBADF;
			goto out;
		}
	}

	/* Allocate new space and copy header into it. */
	newcm = malloc(CMSG_SPACE(nfds * sizeof(file_t *)), M_MBUF, M_WAITOK);
	if (newcm == NULL) {
		error = E2BIG;
		goto out;
	}
	memcpy(newcm, cm, sizeof(struct cmsghdr));
	memset(newcm + 1, 0, CMSG_LEN(0) - sizeof(struct cmsghdr));
	files = (file_t **)CMSG_DATA(newcm);
	/*
	 * Transform the file descriptors into file_t pointers, in
	 * reverse order so that if pointers are bigger than ints, the
	 * int won't get trashed until we're done. No need to lock, as
	 * we have already validated the descriptors with fd_getfile().
	 */
	fdp = (int *)CMSG_DATA(cm) + nfds;
	rp = files + nfds;
	for (i = 0; i < nfds; i++) {
		dt = atomic_load_consume(&fdescp->fd_dt);
		fp = atomic_load_consume(&dt->dt_ff[*--fdp]->ff_file);
		KASSERT(fp != NULL);
		mutex_enter(&fp->f_lock);
		*--rp = fp;
		fp->f_count++;
		fp->f_msgcount++;
		mutex_exit(&fp->f_lock);
	}

 out:
	/* Release descriptor references. */
	fdp = (int *)CMSG_DATA(cm);
	for (i = 0; i < nfds; i++) {
		fd_putfile(*fdp++);
		if (error != 0) {
			atomic_dec_uint(&unp_rights);
		}
	}

	if (error == 0) {
		if (control->m_flags & M_EXT) {
			m_freem(control);
			*controlp = control = m_get(M_WAIT, MT_CONTROL);
		}
		MEXTADD(control, newcm, CMSG_SPACE(nfds * sizeof(file_t *)),
		    M_MBUF, NULL, NULL);
		cm = newcm;
		/*
		 * Adjust message & mbuf to note amount of space
		 * actually used.
		 */
		cm->cmsg_len = CMSG_LEN(nfds * sizeof(file_t *));
		control->m_len = CMSG_SPACE(nfds * sizeof(file_t *));
	}

	return error;
}

struct mbuf *
unp_addsockcred(struct lwp *l, struct mbuf *control)
{
	struct sockcred *sc;
	struct mbuf *m;
	void *p;

	m = sbcreatecontrol1(&p, SOCKCREDSIZE(kauth_cred_ngroups(l->l_cred)),
	    SCM_CREDS, SOL_SOCKET, M_WAITOK);
	if (m == NULL)
		return control;

	sc = p;
	sc->sc_pid = l->l_proc->p_pid;
	sc->sc_uid = kauth_cred_getuid(l->l_cred);
	sc->sc_euid = kauth_cred_geteuid(l->l_cred);
	sc->sc_gid = kauth_cred_getgid(l->l_cred);
	sc->sc_egid = kauth_cred_getegid(l->l_cred);
	sc->sc_ngroups = kauth_cred_ngroups(l->l_cred);

	for (int i = 0; i < sc->sc_ngroups; i++)
		sc->sc_groups[i] = kauth_cred_group(l->l_cred, i);

	return m_add(control, m);
}

/*
 * Do a mark-sweep GC of files in the system, to free up any which are
 * caught in flight to an about-to-be-closed socket. Additionally,
 * process deferred file closures.
 */
static void
unp_gc(file_t *dp)
{
	extern struct domain unixdomain;
	file_t *fp, *np;
	struct socket *so, *so1;
	u_int i, oflags, rflags;
	bool didwork;

	KASSERT(curlwp == unp_thread_lwp);
	KASSERT(mutex_owned(&filelist_lock));

	/*
	 * First, process deferred file closures.
	 */
	while (!SLIST_EMPTY(&unp_thread_discard)) {
		fp = SLIST_FIRST(&unp_thread_discard);
		KASSERT(fp->f_unpcount > 0);
		KASSERT(fp->f_count > 0);
		KASSERT(fp->f_msgcount > 0);
		KASSERT(fp->f_count >= fp->f_unpcount);
		KASSERT(fp->f_count >= fp->f_msgcount);
		KASSERT(fp->f_msgcount >= fp->f_unpcount);
		SLIST_REMOVE_HEAD(&unp_thread_discard, f_unplist);
		i = fp->f_unpcount;
		fp->f_unpcount = 0;
		mutex_exit(&filelist_lock);
		for (; i != 0; i--) {
			unp_discard_now(fp);
		}
		mutex_enter(&filelist_lock);
	}

	/*
	 * Clear mark bits. Ensure that we don't consider new files
	 * entering the file table during this loop (they will not have
	 * FSCAN set).
	 */
	unp_defer = 0;
	LIST_FOREACH(fp, &filehead, f_list) {
		for (oflags = fp->f_flag;; oflags = rflags) {
			rflags = atomic_cas_uint(&fp->f_flag, oflags,
			    (oflags | FSCAN) & ~(FMARK|FDEFER));
			if (__predict_true(oflags == rflags)) {
				break;
			}
		}
	}

	/*
	 * Iterate over the set of sockets, marking ones believed (based on
	 * refcount) to be referenced from a process, and marking for rescan
	 * sockets which are queued on a socket. Rescan continues descending
	 * and searching for sockets referenced by sockets (FDEFER), until
	 * there are no more socket->socket references to be discovered.
	 */
	do {
		didwork = false;
		for (fp = LIST_FIRST(&filehead); fp != NULL; fp = np) {
			KASSERT(mutex_owned(&filelist_lock));
			np = LIST_NEXT(fp, f_list);
			mutex_enter(&fp->f_lock);
			if ((fp->f_flag & FDEFER) != 0) {
				atomic_and_uint(&fp->f_flag, ~FDEFER);
				unp_defer--;
				if (fp->f_count == 0) {
					/*
					 * XXX: closef() doesn't pay attention
					 * to FDEFER
					 */
					mutex_exit(&fp->f_lock);
					continue;
				}
			} else {
				if (fp->f_count == 0 ||
				    (fp->f_flag & FMARK) != 0 ||
				    fp->f_count == fp->f_msgcount ||
				    fp->f_unpcount != 0) {
					mutex_exit(&fp->f_lock);
					continue;
				}
			}
			atomic_or_uint(&fp->f_flag, FMARK);

			if (fp->f_type != DTYPE_SOCKET ||
			    (so = fp->f_socket) == NULL ||
			    so->so_proto->pr_domain != &unixdomain ||
			    (so->so_proto->pr_flags & PR_RIGHTS) == 0) {
				mutex_exit(&fp->f_lock);
				continue;
			}

			/* Gain file ref, mark our position, and unlock. */
			didwork = true;
			LIST_INSERT_AFTER(fp, dp, f_list);
			fp->f_count++;
			mutex_exit(&fp->f_lock);
			mutex_exit(&filelist_lock);

			/*
			 * Mark files referenced from sockets queued on the
			 * accept queue as well.
			 */
			solock(so);
			unp_scan(so->so_rcv.sb_mb, unp_mark, 0);
			if ((so->so_options & SO_ACCEPTCONN) != 0) {
				TAILQ_FOREACH(so1, &so->so_q0, so_qe) {
					unp_scan(so1->so_rcv.sb_mb, unp_mark, 0);
				}
				TAILQ_FOREACH(so1, &so->so_q, so_qe) {
					unp_scan(so1->so_rcv.sb_mb, unp_mark, 0);
				}
			}
			sounlock(so);

			/* Re-lock and restart from where we left off. */
			closef(fp);
			mutex_enter(&filelist_lock);
			np = LIST_NEXT(dp, f_list);
			LIST_REMOVE(dp, f_list);
		}
		/*
		 * Bail early if we did nothing in the loop above.  Could
		 * happen because of concurrent activity causing unp_defer
		 * to get out of sync.
		 */
	} while (unp_defer != 0 && didwork);

	/*
	 * Sweep pass.
	 *
	 * We grab an extra reference to each of the files that are
	 * not otherwise accessible and then free the rights that are
	 * stored in messages on them.
	 */
	for (fp = LIST_FIRST(&filehead); fp != NULL; fp = np) {
		KASSERT(mutex_owned(&filelist_lock));
		np = LIST_NEXT(fp, f_list);
		mutex_enter(&fp->f_lock);

		/*
		 * Ignore non-sockets.
		 * Ignore dead sockets, or sockets with pending close.
		 * Ignore sockets obviously referenced elsewhere.
		 * Ignore sockets marked as referenced by our scan.
		 * Ignore new sockets that did not exist during the scan.
		 */
		if (fp->f_type != DTYPE_SOCKET ||
		    fp->f_count == 0 || fp->f_unpcount != 0 ||
		    fp->f_count != fp->f_msgcount ||
		    (fp->f_flag & (FMARK | FSCAN)) != FSCAN) {
			mutex_exit(&fp->f_lock);
			continue;
		}

		/* Gain file ref, mark our position, and unlock. */
		LIST_INSERT_AFTER(fp, dp, f_list);
		fp->f_count++;
		mutex_exit(&fp->f_lock);
		mutex_exit(&filelist_lock);

		/*
		 * Flush all data from the socket's receive buffer.
		 * This will cause files referenced only by the
		 * socket to be queued for close.
		 */
		so = fp->f_socket;
		solock(so);
		sorflush(so);
		sounlock(so);

		/* Re-lock and restart from where we left off. */
		closef(fp);
		mutex_enter(&filelist_lock);
		np = LIST_NEXT(dp, f_list);
		LIST_REMOVE(dp, f_list);
	}
}

/*
 * Garbage collector thread.  While SCM_RIGHTS messages are in transit,
 * wake once per second to garbage collect.  Run continually while we
 * have deferred closes to process.
 */
static void
unp_thread(void *cookie)
{
	file_t *dp;

	/* Allocate a dummy file for our scans. */
	if ((dp = fgetdummy()) == NULL) {
		panic("unp_thread");
	}

	mutex_enter(&filelist_lock);
	for (;;) {
		KASSERT(mutex_owned(&filelist_lock));
		if (SLIST_EMPTY(&unp_thread_discard)) {
			if (unp_rights != 0) {
				(void)cv_timedwait(&unp_thread_cv,
				    &filelist_lock, hz);
			} else {
				cv_wait(&unp_thread_cv, &filelist_lock);
			}
		}
		unp_gc(dp);
	}
	/* NOTREACHED */
}

/*
 * Kick the garbage collector into action if there is something for
 * it to process.
 */
static void
unp_thread_kick(void)
{

	if (!SLIST_EMPTY(&unp_thread_discard) || unp_rights != 0) {
		mutex_enter(&filelist_lock);
		cv_signal(&unp_thread_cv);
		mutex_exit(&filelist_lock);
	}
}

void
unp_dispose(struct mbuf *m)
{

	if (m)
		unp_scan(m, unp_discard_later, 1);
}

void
unp_scan(struct mbuf *m0, void (*op)(file_t *), int discard)
{
	struct mbuf *m;
	file_t **rp, *fp;
	struct cmsghdr *cm;
	int i, qfds;

	while (m0) {
		for (m = m0; m; m = m->m_next) {
			if (m->m_type != MT_CONTROL ||
			    m->m_len < sizeof(*cm)) {
				continue;
			}
			cm = mtod(m, struct cmsghdr *);
			if (cm->cmsg_level != SOL_SOCKET ||
			    cm->cmsg_type != SCM_RIGHTS)
				continue;
			qfds = (cm->cmsg_len - CMSG_ALIGN(sizeof(*cm)))
			    / sizeof(file_t *);
			rp = (file_t **)CMSG_DATA(cm);
			for (i = 0; i < qfds; i++) {
				fp = *rp;
				if (discard) {
					*rp = 0;
				}
				(*op)(fp);
				rp++;
			}
		}
		m0 = m0->m_nextpkt;
	}
}

void
unp_mark(file_t *fp)
{

	if (fp == NULL)
		return;

	/* If we're already deferred, don't screw up the defer count */
	mutex_enter(&fp->f_lock);
	if (fp->f_flag & (FMARK | FDEFER)) {
		mutex_exit(&fp->f_lock);
		return;
	}

	/*
	 * Minimize the number of deferrals...  Sockets are the only type of
	 * file which can hold references to another file, so just mark
	 * other files, and defer unmarked sockets for the next pass.
	 */
	if (fp->f_type == DTYPE_SOCKET) {
		unp_defer++;
		KASSERT(fp->f_count != 0);
		atomic_or_uint(&fp->f_flag, FDEFER);
	} else {
		atomic_or_uint(&fp->f_flag, FMARK);
	}
	mutex_exit(&fp->f_lock);
}

static void
unp_discard_now(file_t *fp)
{

	if (fp == NULL)
		return;

	KASSERT(fp->f_count > 0);
	KASSERT(fp->f_msgcount > 0);

	mutex_enter(&fp->f_lock);
	fp->f_msgcount--;
	mutex_exit(&fp->f_lock);
	atomic_dec_uint(&unp_rights);
	(void)closef(fp);
}

static void
unp_discard_later(file_t *fp)
{

	if (fp == NULL)
		return;

	KASSERT(fp->f_count > 0);
	KASSERT(fp->f_msgcount > 0);

	mutex_enter(&filelist_lock);
	if (fp->f_unpcount++ == 0) {
		SLIST_INSERT_HEAD(&unp_thread_discard, fp, f_unplist);
	}
	mutex_exit(&filelist_lock);
}

void
unp_sysctl_create(struct sysctllog **clog)
{

	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READWRITE,
	    CTLTYPE_LONG, "sendspace",
	    SYSCTL_DESCR("Default stream send space"),
	    NULL, 0, &unpst_sendspace, 0,
	    CTL_NET, PF_LOCAL, SOCK_STREAM, CTL_CREATE, CTL_EOL);
	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READWRITE,
	    CTLTYPE_LONG, "recvspace",
	    SYSCTL_DESCR("Default stream recv space"),
	    NULL, 0, &unpst_recvspace, 0,
	    CTL_NET, PF_LOCAL, SOCK_STREAM, CTL_CREATE, CTL_EOL);
	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READWRITE,
	    CTLTYPE_LONG, "sendspace",
	    SYSCTL_DESCR("Default datagram send space"),
	    NULL, 0, &unpdg_sendspace, 0,
	    CTL_NET, PF_LOCAL, SOCK_DGRAM, CTL_CREATE, CTL_EOL);
	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READWRITE,
	    CTLTYPE_LONG, "recvspace",
	    SYSCTL_DESCR("Default datagram recv space"),
	    NULL, 0, &unpdg_recvspace, 0,
	    CTL_NET, PF_LOCAL, SOCK_DGRAM, CTL_CREATE, CTL_EOL);
	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READONLY,
	    CTLTYPE_INT, "inflight",
	    SYSCTL_DESCR("File descriptors in flight"),
	    NULL, 0, &unp_rights, 0,
	    CTL_NET, PF_LOCAL, CTL_CREATE, CTL_EOL);
	sysctl_createv(clog, 0, NULL, NULL,
	    CTLFLAG_PERMANENT|CTLFLAG_READONLY,
	    CTLTYPE_INT, "deferred",
	    SYSCTL_DESCR("File descriptors deferred for close"),
	    NULL, 0, &unp_defer, 0,
	    CTL_NET, PF_LOCAL, CTL_CREATE, CTL_EOL);
}

const struct pr_usrreqs unp_usrreqs = {
	.pr_attach	= unp_attach,
	.pr_detach	= unp_detach,
	.pr_accept	= unp_accept,
	.pr_bind	= unp_bind,
	.pr_listen	= unp_listen,
	.pr_connect	= unp_connect,
	.pr_connect2	= unp_connect2,
	.pr_disconnect	= unp_disconnect,
	.pr_shutdown	= unp_shutdown,
	.pr_abort	= unp_abort,
	.pr_ioctl	= unp_ioctl,
	.pr_stat	= unp_stat,
	.pr_peeraddr	= unp_peeraddr,
	.pr_sockaddr	= unp_sockaddr,
	.pr_rcvd	= unp_rcvd,
	.pr_recvoob	= unp_recvoob,
	.pr_send	= unp_send,
	.pr_sendoob	= unp_sendoob,
};