- Mar 26, 2024
Jens Axboe authored
Commit a4104821 upstream. Since we no longer allow sending io_uring fds over SCM_RIGHTS, move to using io_is_uring_fops() to detect whether this is an io_uring fd or not. With that done, kill off io_uring_get_socket() as nobody calls it anymore. This is in preparation for yanking out the rest of the core related to unix gc with io_uring. Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Sasha Levin <sashal@kernel.org>
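As a rough, hedged illustration of the new check (the fd-lookup helper below is hypothetical, not code from this commit), a caller holding a file descriptor can now ask io_is_uring_fops() directly instead of resolving a socket via the removed io_uring_get_socket():

    #include <linux/file.h>
    #include <linux/io_uring.h>

    /* Hypothetical helper: true if the fd refers to an io_uring instance. */
    static bool fd_is_io_uring(int fd)
    {
        struct fd f = fdget(fd);
        bool ret = false;

        if (f.file) {
            ret = io_is_uring_fops(f.file);
            fdput(f);
        }
        return ret;
    }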
-
- Mar 15, 2024
Sasha Levin authored
Tested-by:
Pavel Machek (CIP) <pavel@denx.de> Tested-by:
Dominique Martinet <dominique.martinet@atmark-techno.com> Tested-by:
Linux Kernel Functional Testing <lkft@linaro.org> Tested-by:
Florian Fainelli <florian.fainelli@broadcom.com> Tested-by:
kernelci.org bot <bot@kernelci.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Jan Kundrát authored
[ Upstream commit 3f42b142 ] After upgrading from 5.16 to 6.1, our board with a MAX14830 started producing lots of garbage data over UART. Bisection pointed out commit 285e76fc as the culprit. That patch tried to replace hand-written code which I added in 2b4bac48 ("serial: max310x: Use batched reads when reasonably safe") with the generic regmap infrastructure for batched operations. Unfortunately, the `regmap_raw_read` and `regmap_raw_write` which were used are actually functions which perform IO over *multiple* registers. That's not what is needed for accessing these Tx/Rx FIFOs; the appropriate functions are the `_noinc_` versions, not the `_raw_` ones. Fix this regression by using `regmap_noinc_read()` and `regmap_noinc_write()` along with the necessary `regmap_config` setup; with this patch in place, our board communicates happily again. Since our board uses SPI for talking to this chip, the I2C part is completely untested. Fixes: 285e76fc ("serial: max310x: use regmap methods for SPI batch operations") Cc: stable@vger.kernel.org Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Jan Kundrát <jan.kundrat@cesnet.cz> Link: https://lore.kernel.org/r/79db8e82aadb0e174bc82b9996423c3503c8fb37.1680732084.git.jan.kundrat@cesnet.cz Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
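For readers unfamiliar with the regmap _noinc_ API, a minimal sketch of the pattern this fix relies on follows; the register name, FIFO address and raw limits are illustrative placeholders, not values from max310x.c:

    #include <linux/regmap.h>

    #define CHIP_REG_FIFO   0x00    /* hypothetical non-incrementing FIFO register */

    static bool chip_noinc_reg(struct device *dev, unsigned int reg)
    {
        return reg == CHIP_REG_FIFO;
    }

    static const struct regmap_config chip_regmap_cfg = {
        .reg_bits = 8,
        .val_bits = 8,
        .writeable_noinc_reg = chip_noinc_reg,
        .readable_noinc_reg = chip_noinc_reg,
        .max_raw_read = 128,
        .max_raw_write = 128,
    };

    /* Drain up to len bytes from the Rx FIFO without advancing the register
     * address, unlike regmap_raw_read() which walks consecutive registers. */
    static int chip_read_fifo(struct regmap *map, u8 *buf, size_t len)
    {
        return regmap_noinc_read(map, CHIP_REG_FIFO, buf, len);
    }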
-
Cosmin Tanislav authored
[ Upstream commit 2e1f2d9a ] The I2C implementation on this chip has a few key differences compared to SPI, as described in previous patches:
 * extended register space access needs no extra logic
 * the slave address is used to select which UART to communicate with
To accommodate these differences, add an I2C interface config, set the RevID register address and implement an empty method for setting the GlobalCommand register, since no special handling is needed for the extended register space. To handle the port-specific slave address, create an I2C dummy device for each port, except the base one (UART0), which is expected to be the one specified in firmware, and create a regmap for each I2C device. Add minimum and maximum slave addresses to each devtype for sanity checking. Also, use a separate regmap config with no write_flag_mask, since I2C has an R/W bit in its slave address, and set the max register to the address of the RevID register, since the extended register space needs no extra logic. Finally, add the I2C driver. Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Cosmin Tanislav <cosmin.tanislav@analog.com> Link: https://lore.kernel.org/r/20220605144659.4169853-5-demonsingur@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 3f42b142 ("serial: max310x: fix IO data corruption in batched operations") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Cosmin Tanislav authored
[ Upstream commit b3883ab5 ] SPI can only use 5 address bits, since one bit is reserved for specifying R/W and 2 bits are used to specify the UART port. To access registers that have addresses past 0x1F, an extended register space can be enabled by writing to the GlobalCommand register (address 0x1F). I2C uses 8 address bits. The R/W bit is placed in the slave address, and so is the UART port. Because of this, registers that have addresses higher than 0x1F can be accessed normally. To access the RevID register, on SPI, 0xCE must be written to the 0x1F address to enable the extended register space, after which the RevID register is accessible at address 0x5. 0xCD must be written to the 0x1F address to disable the extended register space. On I2C, the RevID register is accessible at address 0x25. Create an interface config struct, and add a method for toggling the extended register space and a member for the RevId register address. Implement these for SPI. Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Cosmin Tanislav <cosmin.tanislav@analog.com> Link: https://lore.kernel.org/r/20220605144659.4169853-4-demonsingur@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 3f42b142 ("serial: max310x: fix IO data corruption in batched operations") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Marek Vasut authored
[ Upstream commit d77e7456 ] Currently the regmap_config structure only allows the user to implement single element register read/write using .reg_read/.reg_write callbacks. The regmap_bus already implements bulk counterparts of both, and is being misused as a workaround for the missing bulk read/write callbacks in regmap_config by a couple of drivers. To stop this misuse, add the bulk read/write callbacks to regmap_config and call them from the regmap core code. Signed-off-by:
Marek Vasut <marex@denx.de> Cc: Jagan Teki <jagan@amarulasolutions.com> Cc: Mark Brown <broonie@kernel.org> Cc: Maxime Ripard <maxime@cerno.tech> Cc: Robert Foss <robert.foss@linaro.org> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> To: dri-devel@lists.freedesktop.org Link: https://lore.kernel.org/r/20220430025145.640305-1-marex@denx.de Signed-off-by:
Mark Brown <broonie@kernel.org> Stable-dep-of: 3f42b142 ("serial: max310x: fix IO data corruption in batched operations") Signed-off-by:
Sasha Levin <sashal@kernel.org>
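A minimal sketch of the new regmap_config bulk callbacks, assuming the read/write callback shapes below and using a made-up in-memory "register file" as the transport so the example stays self-contained (a real driver would issue its bus I/O here):

    #include <linux/errno.h>
    #include <linux/regmap.h>
    #include <linux/string.h>

    struct my_dev {
        u8 shadow[256];    /* fake register file standing in for real bus I/O */
    };

    /* data = register address byte followed by the payload */
    static int my_bulk_write(void *context, const void *data, size_t count)
    {
        struct my_dev *d = context;
        const u8 *buf = data;

        if (count < 1 || buf[0] + (count - 1) > sizeof(d->shadow))
            return -EINVAL;
        memcpy(&d->shadow[buf[0]], buf + 1, count - 1);
        return 0;
    }

    static int my_bulk_read(void *context, const void *reg_buf, size_t reg_size,
                            void *val_buf, size_t val_size)
    {
        struct my_dev *d = context;
        const u8 *reg = reg_buf;

        if (reg_size != 1 || *reg + val_size > sizeof(d->shadow))
            return -EINVAL;
        memcpy(val_buf, &d->shadow[*reg], val_size);
        return 0;
    }

    static const struct regmap_config my_bulk_cfg = {
        .reg_bits = 8,
        .val_bits = 8,
        .read  = my_bulk_read,
        .write = my_bulk_write,
    };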
-
Ansuel Smith authored
[ Upstream commit 02d6fdec ] Some devices require special handling for reg_update_bits and can't use the normal regmap read/write logic. An example is when locking is handled by the device and read-modify-write (rmw) operations must be performed atomically. Allow declaring a dedicated reg_update_bits function in regmap_config for the no-bus configuration. Signed-off-by:
Ansuel Smith <ansuelsmth@gmail.com> Link: https://lore.kernel.org/r/20211104150040.1260-1-ansuelsmth@gmail.com Signed-off-by:
Mark Brown <broonie@kernel.org> Stable-dep-of: 3f42b142 ("serial: max310x: fix IO data corruption in batched operations") Signed-off-by:
Sasha Levin <sashal@kernel.org>
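A hedged sketch of the new hook: here the device-side locking is modeled with a mutex and a fake register array so the example is self-contained; a real driver would issue its atomic bus transaction instead.

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/mutex.h>
    #include <linux/regmap.h>

    struct my_ctx {
        struct mutex lock;        /* stands in for device/transport-owned locking */
        unsigned int regs[64];    /* fake register file for this sketch */
    };

    static int my_reg_update_bits(void *context, unsigned int reg,
                                  unsigned int mask, unsigned int val)
    {
        struct my_ctx *ctx = context;

        if (reg >= ARRAY_SIZE(ctx->regs))
            return -EINVAL;

        mutex_lock(&ctx->lock);
        ctx->regs[reg] = (ctx->regs[reg] & ~mask) | (val & mask);
        mutex_unlock(&ctx->lock);
        return 0;
    }

    static const struct regmap_config my_rmw_cfg = {
        .reg_bits = 32,
        .val_bits = 32,
        .reg_update_bits = my_reg_update_bits,    /* bypasses regmap's own RMW */
    };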
-
Andrea Parri (Microsoft) authored
[ Upstream commit 0c85c54b ] Running out of request IDs on a channel essentially produces the same effect as running out of space in the ring buffer, in that -EAGAIN is returned. The error message in hv_ringbuffer_write() should either be dropped (since we don't output a message when the ring buffer is full) or be made conditional/debug-only. Suggested-by:
Michael Kelley <mikelley@microsoft.com> Signed-off-by:
Andrea Parri (Microsoft) <parri.andrea@gmail.com> Fixes: e8b7db38 ("Drivers: hv: vmbus: Add vmbus_requestor data structure for VMBus hardening") Link: https://lore.kernel.org/r/20210301191348.196485-1-parri.andrea@gmail.com Signed-off-by:
Wei Liu <wei.liu@kernel.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andy Shevchenko authored
[ Upstream commit 61acabaa ] In one error case the clock may be left prepared and enabled. Unprepare and disable the clock in that case to balance the hardware state. Fixes: d4d6f03c ("serial: max310x: Try to get crystal clock rate from property") Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20210625153733.12911-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
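The fix boils down to the usual unwind pattern; a simplified sketch follows (the frequency limits are illustrative, not the exact max310x bounds):

    #include <linux/clk.h>
    #include <linux/errno.h>

    static int example_enable_osc(struct clk *clk, unsigned long *freq)
    {
        int ret;

        ret = clk_prepare_enable(clk);
        if (ret)
            return ret;

        *freq = clk_get_rate(clk);
        if (*freq < 500000 || *freq > 35000000) {
            ret = -EINVAL;
            goto err_clk;    /* leave the clock unprepared/disabled on failure */
        }
        return 0;

    err_clk:
        clk_disable_unprepare(clk);
        return ret;
    }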
-
Oleg Nesterov authored
[ Upstream commit f7ec1cd5 ] lock_task_sighand() can trigger a hard lockup. If NR_CPUS threads call getrusage() at the same time and the process has NR_THREADS, spin_lock_irq will spin with irqs disabled for O(NR_CPUS * NR_THREADS) time. Change getrusage() to use sig->stats_lock, which was specifically designed for this type of use. This way it runs lockless in the likely case.

TODO:
 - Change do_task_stat() to use sig->stats_lock too, then we can remove spin_lock_irq(siglock) in wait_task_zombie().
 - Turn sig->stats_lock into seqcount_rwlock_t; this way the readers in the slow mode won't exclude each other. See https://lore.kernel.org/all/20230913154907.GA26210@redhat.com/
 - stats_lock has to disable irqs because ->siglock can be taken in irq context; it would be very nice to change __exit_signal() to avoid the siglock->stats_lock dependency.

Link: https://lkml.kernel.org/r/20240122155053.GA26214@redhat.com Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Reported-by:
Dylan Hatch <dylanbhatch@google.com> Tested-by:
Dylan Hatch <dylanbhatch@google.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
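A sketch of the lockless-in-the-likely-case reader pattern that sig->stats_lock enables; the two counters read here are placeholders for the full getrusage() bookkeeping, not the actual hunk:

    #include <linux/sched/signal.h>
    #include <linux/seqlock.h>

    static void read_group_times(struct signal_struct *sig, u64 *ut, u64 *st)
    {
        unsigned long flags;
        int seq = 0;

    retry:
        flags = read_seqbegin_or_lock_irqsave(&sig->stats_lock, &seq);

        *ut = sig->utime;    /* placeholder reads; getrusage() sums much more */
        *st = sig->stime;

        if (need_seqretry(&sig->stats_lock, seq)) {
            seq = 1;    /* second pass takes the lock (slow mode) */
            goto retry;
        }
        done_seqretry_irqrestore(&sig->stats_lock, seq, flags);
    }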
-
Oleg Nesterov authored
[ Upstream commit 13b7bc60 ] do/while_each_thread should be avoided when possible. Plus, this change allows us to avoid lock_task_sighand(); we can use RCU and/or sig->stats_lock instead. Link: https://lkml.kernel.org/r/20230909172629.GA20454@redhat.com Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: f7ec1cd5 ("getrusage: use sig->stats_lock rather than lock_task_sighand()") Signed-off-by:
Sasha Levin <sashal@kernel.org>
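For illustration only (not the actual getrusage() hunk), the preferred iteration looks like this; the field being summed is just an example:

    #include <linux/rcupdate.h>
    #include <linux/sched/signal.h>

    static unsigned long sum_minor_faults(struct task_struct *p)
    {
        struct task_struct *t;
        unsigned long sum = 0;

        rcu_read_lock();
        for_each_thread(p, t)    /* plain loop, safe under RCU */
            sum += t->min_flt;
        rcu_read_unlock();

        return sum;
    }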
-
Oleg Nesterov authored
[ Upstream commit daa694e4 ] Patch series "getrusage: use sig->stats_lock", v2. This patch (of 2): thread_group_cputime() does its own locking, so we can safely shift thread_group_cputime_adjusted(), which does another for_each_thread loop, outside of the ->siglock protected section. This is also preparation for the next patch, which changes getrusage() to use stats_lock instead of siglock; thread_group_cputime() takes the same lock. With the current implementation recursive read_seqbegin_or_lock() is fine, since thread_group_cputime() can't enter the slow mode if the caller holds stats_lock, yet this looks safer and better performance-wise. Link: https://lkml.kernel.org/r/20240122155023.GA26169@redhat.com Link: https://lkml.kernel.org/r/20240122155050.GA26205@redhat.com Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Reported-by:
Dylan Hatch <dylanbhatch@google.com> Tested-by:
Dylan Hatch <dylanbhatch@google.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Oleg Nesterov authored
[ Upstream commit c7ac8231 ] No functional changes, cleanup/preparation. Link: https://lkml.kernel.org/r/20230909172554.GA20441@redhat.com Signed-off-by:
Oleg Nesterov <oleg@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Stable-dep-of: daa694e4 ("getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Prakash Sangappa authored
[ Upstream commit e656c7a9 ] For shared memory of type SHM_HUGETLB, hugetlb pages are reserved in the shmget() call. If the SHM_NORESERVE flag is specified then the hugetlb pages are not reserved. However, when the shared memory is attached with the shmat() call, the hugetlb pages are incorrectly reserved for SHM_HUGETLB shared memory created with SHM_NORESERVE, which is a bug.

-------------------------------
The following test shows the issue.

$ cat shmhtb.c

int main()
{
    int shmflags = 0660 | IPC_CREAT | SHM_HUGETLB | SHM_NORESERVE;
    int shmid;

    shmid = shmget(SKEY, SHMSZ, shmflags);
    if (shmid < 0) {
        printf("shmat: shmget() failed, %d\n", errno);
        return 1;
    }
    printf("After shmget()\n");
    system("cat /proc/meminfo | grep -i hugepages_");

    shmat(shmid, NULL, 0);
    printf("\nAfter shmat()\n");
    system("cat /proc/meminfo | grep -i hugepages_");

    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}

# sysctl -w vm.nr_hugepages=20
# ./shmhtb

After shmget()
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        0
HugePages_Surp:        0

After shmat()
HugePages_Total:      20
HugePages_Free:       20
HugePages_Rsvd:        5  <--
HugePages_Surp:        0
--------------------------------

The fix is to ensure that hugetlb pages are not reserved for SHM_HUGETLB shared memory in the shmat() call. Link: https://lkml.kernel.org/r/1706040282-12388-1-git-send-email-prakash.sangappa@oracle.com Signed-off-by:
Prakash Sangappa <prakash.sangappa@oracle.com> Acked-by:
Muchun Song <muchun.song@linux.dev> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Mike Kravetz authored
[ Upstream commit 33b8f84a ] While reviewing a bug in hugetlb_reserve_pages, it was noticed that all callers ignore the return value. Any failure is considered an ENOMEM error by the callers. Change the function to be of type bool. The function will return true if the reservation was successful, false otherwise. Callers currently assume a zero return code indicates success. Change the callers to look for true to indicate success. No functional change, only code cleanup. Link: https://lkml.kernel.org/r/20201221192542.15732-1-mike.kravetz@oracle.com Signed-off-by:
Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Cc: David Hildenbrand <david@redhat.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Davidlohr Bueso <dave@stgolabs.net> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Stable-dep-of: e656c7a9 ("mm: hugetlb pages should not be reserved by shmat() if SHM_NORESERVE") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Shradha Gupta authored
[ Upstream commit 9cae43da ] If the hv_netvsc driver is unloaded and reloaded, the NET_DEVICE_REGISTER handler cannot register the VF successfully, as the register call is received before netvsc_probe is finished. This is because we call register_netdevice_notifier() very early (even before vmbus_driver_register()). To fix this, we try to register each such matching VF (if it is visible as a netdevice) at the end of netvsc_probe. Cc: stable@vger.kernel.org Fixes: 85520856 ("hv_netvsc: Fix race of register_netdevice_notifier and VF register") Suggested-by:
Dexuan Cui <decui@microsoft.com> Signed-off-by:
Shradha Gupta <shradhagupta@linux.microsoft.com> Reviewed-by:
Haiyang Zhang <haiyangz@microsoft.com> Reviewed-by:
Dexuan Cui <decui@microsoft.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Juhee Kang authored
[ Upstream commit c60882a4 ] Use the netif_is_bond_master() function instead of the open-coded check ((event_dev->priv_flags & IFF_BONDING) && (event_dev->flags & IFF_MASTER)). This patch doesn't change the logic. Signed-off-by:
Juhee Kang <claudiajkang@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
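For reference, the equivalence is just the following (illustrative sketch, not the netvsc diff itself):

    #include <linux/netdevice.h>

    static bool example_is_bond_master(const struct net_device *event_dev)
    {
        /* open-coded form this helper replaces:
         *   (event_dev->priv_flags & IFF_BONDING) &&
         *   (event_dev->flags & IFF_MASTER)
         */
        return netif_is_bond_master(event_dev);
    }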
-
Dexuan Cui authored
[ Upstream commit 64ff412a ] Currently the netvsc/VF binding logic only checks the PCI serial number. The Microsoft Azure Network Adapter (MANA) supports multiple net_device interfaces (each such interface is called a "vPort", and has its unique MAC address) which are backed by the same VF PCI device, so the binding logic should check both the MAC address and the PCI serial number. The change should not break any other existing VF drivers, because Hyper-V NIC SR-IOV implementation requires the netvsc network interface and the VF network interface have the same MAC address. Co-developed-by:
Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by:
Haiyang Zhang <haiyangz@microsoft.com> Co-developed-by:
Shachar Raindel <shacharr@microsoft.com> Signed-off-by:
Shachar Raindel <shacharr@microsoft.com> Acked-by:
Stephen Hemminger <stephen@networkplumber.org> Signed-off-by:
Dexuan Cui <decui@microsoft.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Long Li authored
[ Upstream commit 34b06a2e ] On VF hot remove, NETDEV_GOING_DOWN is sent to notify the VF is about to go down. At this time, the VF is still sending/receiving traffic and we request the VSP to switch datapath. On completion, the datapath is switched to synthetic and we can proceed with VF hot remove. Signed-off-by:
Long Li <longli@microsoft.com> Reviewed-by:
Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Long Li authored
[ Upstream commit 8b31f8c9 ] The completion indicates if NVSP_MSG4_TYPE_SWITCH_DATA_PATH has been processed by the VSP. The traffic is steered to VF or synthetic after we receive this completion. Signed-off-by:
Long Li <longli@microsoft.com> Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andres Beltran authored
[ Upstream commit 4d18fcc9 ] Currently, pointers to guest memory are passed to Hyper-V as transaction IDs in netvsc. In the face of errors or malicious behavior in Hyper-V, netvsc should not expose or trust the transaction IDs returned by Hyper-V to be valid guest memory addresses. Instead, use small integers generated by vmbus_requestor as requests (transaction) IDs. Signed-off-by:
Andres Beltran <lkmlabelt@gmail.com> Co-developed-by:
Andrea Parri (Microsoft) <parri.andrea@gmail.com> Signed-off-by:
Andrea Parri (Microsoft) <parri.andrea@gmail.com> Reviewed-by:
Michael Kelley <mikelley@microsoft.com> Acked-by:
Jakub Kicinski <kuba@kernel.org> Reviewed-by:
Wei Liu <wei.liu@kernel.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/r/20201109100402.8946-4-parri.andrea@gmail.com Signed-off-by:
Wei Liu <wei.liu@kernel.org> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andres Beltran authored
[ Upstream commit e8b7db38 ] Currently, VMbus drivers use pointers into guest memory as request IDs for interactions with Hyper-V. To be more robust in the face of errors or malicious behavior from a compromised Hyper-V, avoid exposing guest memory addresses to Hyper-V. Also avoid Hyper-V giving back a bad request ID that is then treated as the address of a guest data structure with no validation. Instead, encapsulate these memory addresses and provide small integers as request IDs. Signed-off-by:
Andres Beltran <lkmlabelt@gmail.com> Co-developed-by:
Andrea Parri (Microsoft) <parri.andrea@gmail.com> Signed-off-by:
Andrea Parri (Microsoft) <parri.andrea@gmail.com> Reviewed-by:
Michael Kelley <mikelley@microsoft.com> Reviewed-by:
Wei Liu <wei.liu@kernel.org> Link: https://lore.kernel.org/r/20201109100402.8946-2-parri.andrea@gmail.com Signed-off-by:
Wei Liu <wei.liu@kernel.org> Stable-dep-of: 9cae43da ("hv_netvsc: Register VF in netvsc_probe if NET_DEVICE_REGISTER missed") Signed-off-by:
Sasha Levin <sashal@kernel.org>
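Conceptually the change maps guest buffer addresses to small table indices before anything is handed to the host. The sketch below illustrates that idea with an IDR and is not the actual vmbus_requestor implementation (which uses a fixed-size array and bitmap sized to the ring):

    #include <linux/gfp.h>
    #include <linux/idr.h>
    #include <linux/spinlock.h>

    struct req_table {
        struct idr ids;      /* init with idr_init() */
        spinlock_t lock;     /* init with spin_lock_init() */
    };

    /* Hand the host a small integer instead of the buffer's guest address. */
    static int req_id_alloc(struct req_table *t, void *guest_ptr)
    {
        unsigned long flags;
        int id;

        spin_lock_irqsave(&t->lock, flags);
        id = idr_alloc(&t->ids, guest_ptr, 1, 0, GFP_ATOMIC);
        spin_unlock_irqrestore(&t->lock, flags);
        return id;    /* negative errno if no ID is available */
    }

    /* On completion, translate the ID back and drop it; a bogus ID from the
     * host yields NULL instead of being dereferenced as a guest address. */
    static void *req_id_claim(struct req_table *t, int id)
    {
        unsigned long flags;
        void *ptr;

        spin_lock_irqsave(&t->lock, flags);
        ptr = idr_remove(&t->ids, id);
        spin_unlock_irqrestore(&t->lock, flags);
        return ptr;
    }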
-
Zhang Yi authored
[ Upstream commit acf795dc ] ext4_da_map_blocks() only holds i_data_sem in shared mode and i_rwsem when inserting delalloc extents; it could be raced by another querying path of ext4_map_blocks() without i_rwsem, e.g. the buffered read path. Suppose we buffered-read a file containing just a hole, without any cached extents tree; it is then raced by another delayed buffered write to the same area (or a nearby area belonging to the same hole), and the new delalloc extent could be overwritten by a hole extent.

 pread()                          pwrite()
  filemap_read_folio()
   ext4_mpage_readpages()
    ext4_map_blocks()
     down_read(i_data_sem)
     ext4_ext_determine_hole()
     //find hole
     ext4_ext_put_gap_in_cache()
      ext4_es_find_extent_range()
      //no delalloc extent
                                   ext4_da_map_blocks()
                                    down_read(i_data_sem)
                                    ext4_insert_delayed_block()
                                    //insert delalloc extent
      ext4_es_insert_extent()
      //overwrite delalloc extent to hole

This race could lead to an inconsistent delalloc extents tree and an incorrect reserved space counter. Fix this by converting to hold i_data_sem in exclusive mode when adding a new delalloc extent in ext4_da_map_blocks(). Cc: stable@vger.kernel.org Signed-off-by:
Zhang Yi <yi.zhang@huawei.com> Suggested-by:
Jan Kara <jack@suse.cz> Reviewed-by:
Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20240127015825.1608160-3-yi.zhang@huaweicloud.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Sasha Levin <sashal@kernel.org>
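In essence the fix turns the shared acquisition into an exclusive one around the insertion. A simplified sketch, assuming the in-file ext4 helpers (this is not the complete ext4_da_map_blocks() logic):

    /* Simplified sketch; ext4_insert_delayed_block() is the static helper
     * named in the race diagram above and is only visible inside inode.c. */
    static int da_insert_delalloc(struct inode *inode, ext4_lblk_t lblk)
    {
        int ret;

        /* exclusive: a concurrent ext4_map_blocks() reader can no longer
         * cache a hole over the delalloc extent being inserted */
        down_write(&EXT4_I(inode)->i_data_sem);
        ret = ext4_insert_delayed_block(inode, lblk);
        up_write(&EXT4_I(inode)->i_data_sem);

        return ret;
    }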
-
Zhang Yi authored
[ Upstream commit 3fcc2b88 ] Refactor and cleanup ext4_da_map_blocks(), reduce some unnecessary parameters and branches, no logic changes. Signed-off-by:
Zhang Yi <yi.zhang@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20240127015825.1608160-2-yi.zhang@huaweicloud.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Stable-dep-of: acf795dc ("ext4: convert to exclusive lock while inserting delalloc extents") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Baokun Li authored
[ Upstream commit 6c120399 ] Now that ext4_es_insert_extent() never returns an error, make it return void. Signed-off-by:
Baokun Li <libaokun1@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20230424033846.4732-12-libaokun1@huawei.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Stable-dep-of: acf795dc ("ext4: convert to exclusive lock while inserting delalloc extents") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Ondrej Mosnacek authored
[ Upstream commit 5a287d3d ] For these hooks the true "neutral" value is -EOPNOTSUPP, which is currently what is returned when no LSM provides this hook and what LSMs return when there is no security context set on the socket. Correct the value in <linux/lsm_hooks.h> and adjust the dispatch functions in security/security.c to avoid issues when the BPF LSM is enabled. Cc: stable@vger.kernel.org Fixes: 98e828a0 ("security: Refactor declaration of LSM hooks") Signed-off-by:
Ondrej Mosnacek <omosnace@redhat.com> [PM: subject line tweak] Signed-off-by:
Paul Moore <paul@paul-moore.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Paul Moore authored
[ Upstream commit b10b9c34 ] Commit 4ff09db1 ("bpf: net: Change sk_getsockopt() to take the sockptr_t argument") made it possible to call sk_getsockopt() with both user and kernel address space buffers through the use of the sockptr_t type. Unfortunately at the time of conversion the security_socket_getpeersec_stream() LSM hook was written to only accept userspace buffers, and in a desire to avoid having to change the LSM hook the commit author simply passed the sockptr_t's userspace buffer pointer. Since the only sk_getsockopt() callers at the time of conversion which used kernel sockptr_t buffers did not allow SO_PEERSEC, and hence the security_socket_getpeersec_stream() hook, this was acceptable but also very fragile as future changes presented the possibility of silently passing kernel space pointers to the LSM hook. There are several ways to protect against this, including careful code review of future commits, but since relying on code review to catch bugs is a recipe for disaster and the upstream eBPF maintainer is "strongly against defensive programming", this patch updates the LSM hook, and all of the implementations to support sockptr_t and safely handle both user and kernel space buffers. Acked-by:
Casey Schaufler <casey@schaufler-ca.com> Acked-by:
John Johansen <john.johansen@canonical.com> Signed-off-by:
Paul Moore <paul@paul-moore.com> Stable-dep-of: 5a287d3d ("lsm: fix default return value of the socket_getpeersec_*() hooks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Martin KaFai Lau authored
[ Upstream commit 4ff09db1 ] This patch changes sk_getsockopt() to take the sockptr_t argument such that it can be used by bpf_getsockopt(SOL_SOCKET) in a later patch. security_socket_getpeersec_stream() is not changed. It stays with the __user ptr (optval.user and optlen.user) to avoid changes to other security hooks. bpf_getsockopt(SOL_SOCKET) also does not support SO_PEERSEC. Signed-off-by:
Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220902002802.2888419-1-kafai@fb.com Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Stable-dep-of: 5a287d3d ("lsm: fix default return value of the socket_getpeersec_*() hooks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Martin KaFai Lau authored
[ Upstream commit ba74a760 ] A later patch refactors bpf_getsockopt(SOL_SOCKET) with sock_getsockopt() to avoid code duplication and code drift between the two duplicates. The current sock_getsockopt() takes a sock ptr as the argument. The very first thing this function does is get back the sk ptr by 'sk = sock->sk'. bpf_getsockopt() could be called when the sk does not have the sock ptr created, meaning sk->sk_socket is NULL. For example, when a passive tcp connection has just been established but has not yet been accept()-ed. Thus, it cannot use sock_getsockopt(sk->sk_socket) or else it will pass a NULL ptr. This patch moves all of the sock_getsockopt implementation to the newly added sk_getsockopt(). The new sk_getsockopt() takes a sk ptr and immediately gets the sock ptr by 'sock = sk->sk_socket'. The existing sock_getsockopt(sock) is changed to call sk_getsockopt(sock->sk). All existing callers have both sock->sk and sk->sk_socket pointers. The later patch will make bpf_getsockopt(SOL_SOCKET) call sk_getsockopt(sk) directly. bpf_getsockopt(SOL_SOCKET) does not use the optnames that require sk->sk_socket, so it will be safe. Signed-off-by:
Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220902002756.2887884-1-kafai@fb.com Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Stable-dep-of: 5a287d3d ("lsm: fix default return value of the socket_getpeersec_*() hooks") Signed-off-by:
Sasha Levin <sashal@kernel.org>
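A hedged sketch of the sockptr_t pattern that makes this refactor possible: one helper can serve both user and kernel buffers, with callers wrapping their pointers in USER_SOCKPTR()/KERNEL_SOCKPTR(). The option value below is a made-up example, not a real SOL_SOCKET optname handler:

    #include <linux/errno.h>
    #include <linux/sockptr.h>
    #include <linux/types.h>

    static int example_get_u32_opt(u32 value, sockptr_t optval, sockptr_t optlen)
    {
        int len;

        if (copy_from_sockptr(&len, optlen, sizeof(len)))
            return -EFAULT;
        if (len < (int)sizeof(value))
            return -EINVAL;

        /* copy_to_sockptr() does copy_to_user() or memcpy() as appropriate */
        if (copy_to_sockptr(optval, &value, sizeof(value)))
            return -EFAULT;

        len = sizeof(value);
        return copy_to_sockptr(optlen, &len, sizeof(len)) ? -EFAULT : 0;
    }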
-
Hugo Villeneuve authored
[ Upstream commit b35f8dbb ] If there is a problem after resetting a port, the do/while() loop that checks the default value of the DIVLSB register may run forever and spam the I2C bus. Add a delay before each read of DIVLSB, and a maximum number of tries to prevent that situation from happening. Also fail the probe if the port reset is unsuccessful. Fixes: 10d8b34a ("serial: max310x: Driver rework") Cc: stable@vger.kernel.org Signed-off-by:
Hugo Villeneuve <hvilleneuve@dimonoff.com> Link: https://lore.kernel.org/r/20240116213001.3691629-5-hugo@hugovil.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
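The shape of the bounded poll is sketched below; the register address, delay and retry count are illustrative rather than the exact max310x values:

    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/regmap.h>

    #define EX_REG_DIVLSB      0x1c    /* hypothetical address */
    #define EX_DIVLSB_DEFAULT  0x01
    #define EX_RESET_TRIES     10

    static int example_wait_port_reset(struct regmap *regmap)
    {
        unsigned int val, tries = 0;
        int ret;

        do {
            msleep(1);    /* don't hammer the I2C/SPI bus */
            ret = regmap_read(regmap, EX_REG_DIVLSB, &val);
            if (ret)
                return ret;
            if (++tries > EX_RESET_TRIES)
                return -ETIMEDOUT;    /* fail probe instead of looping forever */
        } while (val != EX_DIVLSB_DEFAULT);

        return 0;
    }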
-
Cosmin Tanislav authored
[ Upstream commit 6ef281da ] The driver currently does manual register manipulation in multiple places to talk to a specific UART port. In order to talk to a specific UART port over SPI, the bits U1 and U0 of the register address can be set, as explained in the Command byte configuration section of the datasheet. Make this more elegant by creating regmaps for each UART port and setting the read_flag_mask and write_flag_mask accordingly. All communications regarding global registers are done on UART port 0, so replace the global regmap entirely with the port 0 regmap. Also, remove the 0x1f masks from the reg_writeable(), reg_volatile() and reg_precious() methods, since setting the U1 and U0 bits of the register address happens inside the regmap core now. Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Cosmin Tanislav <cosmin.tanislav@analog.com> Link: https://lore.kernel.org/r/20220605144659.4169853-3-demonsingur@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: b35f8dbb ("serial: max310x: prevent infinite while() loop in port startup") Signed-off-by:
Sasha Levin <sashal@kernel.org>
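A sketch of the per-port regmap idea: the UART index is encoded into the command byte via the regmap flag masks so callers keep using plain register addresses. Bit positions and the write bit below are illustrative placeholders, not the exact max310x command-byte layout:

    #include <linux/regmap.h>
    #include <linux/spi/spi.h>

    #define EX_PORT_SHIFT  5    /* hypothetical U1/U0 position in the command byte */
    #define EX_WRITE_BIT   0x80

    static struct regmap *example_port_regmap(struct spi_device *spi,
                                              struct regmap_config *cfg,
                                              unsigned int port)
    {
        cfg->read_flag_mask = port << EX_PORT_SHIFT;
        cfg->write_flag_mask = (port << EX_PORT_SHIFT) | EX_WRITE_BIT;

        return devm_regmap_init_spi(spi, cfg);
    }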
-
Cosmin Tanislav authored
[ Upstream commit 285e76fc ] The SPI batch read/write operations can be implemented as simple regmap raw read and write, which will also try to do a gather write just as it is done here. Use the regmap raw read and write methods. Reviewed-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by:
Cosmin Tanislav <cosmin.tanislav@analog.com> Link: https://lore.kernel.org/r/20220605144659.4169853-2-demonsingur@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: b35f8dbb ("serial: max310x: prevent infinite while() loop in port startup") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andy Shevchenko authored
[ Upstream commit c808fab6 ] The device property API allows gathering device resources from different sources, such as ACPI. Convert the drivers to unleash the power of the device property API. Signed-off-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20201007084635.594991-1-andy.shevchenko@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: b35f8dbb ("serial: max310x: prevent infinite while() loop in port startup") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Hugo Villeneuve authored
[ Upstream commit 8afa6c6d ] A stable clock is really required in order to use this UART, so log an error message and bail out if the chip reports that the clock is not stable. Fixes: 4cf9a888 ("serial: max310x: Check the clock readiness") Cc: stable@vger.kernel.org Suggested-by:
Jan Kundrát <jan.kundrat@cesnet.cz> Link: https://www.spinics.net/lists/linux-serial/msg35773.html Signed-off-by:
Hugo Villeneuve <hvilleneuve@dimonoff.com> Link: https://lore.kernel.org/r/20240116213001.3691629-4-hugo@hugovil.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Andy Shevchenko authored
[ Upstream commit d4d6f03c ] In some configurations, mainly ACPI-based, the clock frequency of the device is supplied by the well-established 'clock-frequency' property. Hence, try to get it from the property as a last resort if no other providers are available. Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20210517172930.83353-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 8afa6c6d ("serial: max310x: fail probe if clock crystal is unstable") Signed-off-by:
Sasha Levin <sashal@kernel.org>
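The resulting lookup order can be sketched roughly as follows (simplified; the clock name and error handling are placeholders rather than the literal driver code):

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/property.h>

    static int example_get_uart_clk_rate(struct device *dev, unsigned long *freq)
    {
        struct clk *clk;
        u32 prop;

        clk = devm_clk_get_optional(dev, NULL);    /* NULL if no clock is described */
        if (IS_ERR(clk))
            return PTR_ERR(clk);

        if (clk && clk_get_rate(clk)) {
            *freq = clk_get_rate(clk);
            return 0;
        }

        /* last resort: the ACPI/DT supplied "clock-frequency" property */
        if (!device_property_read_u32(dev, "clock-frequency", &prop) && prop) {
            *freq = prop;
            return 0;
        }

        return -EINVAL;
    }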
-
Andy Shevchenko authored
[ Upstream commit 974e454d ] Simplify the code which fetches the input clock by using devm_clk_get_optional(). If no input clock is present devm_clk_get_optional() will return NULL instead of an error which matches the behavior of the old code. Signed-off-by:
Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20201007084635.594991-2-andy.shevchenko@gmail.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 8afa6c6d ("serial: max310x: fail probe if clock crystal is unstable") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Michal Pecio authored
[ Upstream commit 7c4650de ] xHCI 4.9 explicitly forbids assuming that the xHC has released its ownership of a multi-TRB TD when it reports an error on one of the early TRBs. Yet the driver makes such an assumption and releases the TD, allowing the remaining TRBs to be freed or overwritten by new TDs. The xHC should also report completion of the final TRB due to its IOC flag being set by us, regardless of prior errors. This event cannot be recognized if the TD has already been freed earlier, resulting in a "Transfer event TRB DMA ptr not part of current TD" error message. Fix this by reusing the logic for processing isoc Transaction Errors. This also handles hosts which fail to report the final completion. Fix transfer length reporting on Babble errors. They may be caused by device malfunction; there is no guarantee that the buffer has been filled. Signed-off-by:
Michal Pecio <michal.pecio@gmail.com> Cc: stable@vger.kernel.org Signed-off-by:
Mathias Nyman <mathias.nyman@linux.intel.com> Link: https://lore.kernel.org/r/20240125152737.2983959-5-mathias.nyman@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Mathias Nyman authored
[ Upstream commit 5372c65e ] The last TRB of an isoc TD might not trigger an event if there was an error event for a TRB mid TD. This is seen on a NEC Corporation uPD720200 USB 3.0 Host. After an error mid a multi-TRB TD the xHC should, according to xHCI 4.9.1, generate events for passed TRBs with the IOC flag set if it proceeds to the next TD. This event is either a copy of the original error, or a "success" transfer event. If that event is missing then the driver and the xHC host get out of sync, as the driver is still expecting a transfer event for that first TD, while the xHC host is already sending events for the next TD in the list. This leads to "Transfer event TRB DMA ptr not part of current TD" messages. As a solution we tag the isoc TDs that get error events mid TD. If an event doesn't match the first TD, then check if the tag is set and the event points to the next TD. In that case give back the first TD and process the next TD normally. Make sure the TD status and transferred length stay valid in both cases, with and without a final TD completion event. Reported-by:
Michał Pecio <michal.pecio@gmail.com> Closes: https://lore.kernel.org/linux-usb/20240112235205.1259f60c@foxbook/ Tested-by:
Michał Pecio <michal.pecio@gmail.com> Cc: stable@vger.kernel.org Signed-off-by:
Mathias Nyman <mathias.nyman@linux.intel.com> Link: https://lore.kernel.org/r/20240125152737.2983959-4-mathias.nyman@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Mathias Nyman authored
[ Upstream commit e9fcb077 ] The same values are parsed several times from transfer and event TRBs by different functions in the same call path, all while processing one transfer event. As the TRBs are in DMA memory and can be accessed by the xHC host, we want to avoid this to prevent double-fetch issues. To resolve this, pass the already parsed values to the different functions in the path of parsing a transfer event. Signed-off-by:
Mathias Nyman <mathias.nyman@linux.intel.com> Link: https://lore.kernel.org/r/20210406070208.3406266-5-mathias.nyman@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 5372c65e ("xhci: process isoc TD properly when there was a transaction error mid TD.") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Mathias Nyman authored
[ Upstream commit 55f6153d ] When finishing a TD we walk the endpoint dequeue trb pointer until it matches the last TRB of the TD. TDs can contain over 100 TRBs, meaning we call a function 100 times, do a few comparisons and increase a couple of values for each of these calls, all in interrupt context. This can all be avoided by adding a pointer to the last TRB segment, and a number of TRBs in the TD. So instead of walking through each TRB, just set the new dequeue segment, pointer, and number of free TRBs directly. Getting rid of the while loop also reduces the risk of getting stuck in an infinite loop in the interrupt handler. The loop relied on valid matching dequeue and last_trb values to break. Signed-off-by:
Mathias Nyman <mathias.nyman@linux.intel.com> Link: https://lore.kernel.org/r/20210129130044.206855-12-mathias.nyman@linux.intel.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Stable-dep-of: 5372c65e ("xhci: process isoc TD properly when there was a transaction error mid TD.") Signed-off-by:
Sasha Levin <sashal@kernel.org>
-