  1. Nov 03, 2022
• net/atm: fix proc_mpc_write incorrect return value · 726e4d16
      Xiaobo Liu authored
      
      [ Upstream commit d8bde3bf ]
      
When the input contains '\0' or '\n', proc_mpc_write has already read that
character, so the return value needs to be incremented by one.
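
A minimal standalone sketch of the counting pattern this describes (illustrative names only, not the kernel's proc_mpc_write):

```
#include <stddef.h>

/* Copy bytes from src into dst until '\0' or '\n' is seen or max bytes
 * are consumed.  The terminator is read from src, so it must be counted
 * in the return value even though it ends the loop early. */
static size_t read_line(char *dst, const char *src, size_t max)
{
	size_t len = 0;

	while (len < max) {
		char c = src[len];

		dst[len] = c;
		len++;              /* count the byte we just read... */
		if (c == '\0' || c == '\n')
			break;      /* ...including the terminator */
	}
	return len;                 /* bytes actually consumed */
}
```

Returning the count before the increment would under-report by one whenever the loop stops on a terminator, which is the off-by-one described above.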
      
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• HID: magicmouse: Do not set BTN_MOUSE on double report · 0b127ae5
      José Expósito authored
      [ Upstream commit bb5f0c85 ]
      
      Under certain conditions the Magic Trackpad can group 2 reports in a
      single packet. The packet is split and the raw event function is
      invoked recursively for each part.
      
      However, after processing each part, the BTN_MOUSE status is updated,
      sending multiple click events. [1]
      
      Return after processing double reports to avoid this issue.
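
A rough, self-contained sketch of the control flow this describes (hypothetical names, not the magicmouse driver code): when a packet holds two reports, handle each half recursively and return before the code that would emit another button update.

```
#include <stdint.h>

typedef uint8_t u8;

enum { REPORT_SIZE = 8 };   /* hypothetical fixed report length */

struct device_state { int clicks; };

/* Hypothetical helpers standing in for the real touch/button handling. */
static void parse_touches(struct device_state *st, const u8 *data, int size)
{
	(void)st; (void)data; (void)size;
}

static void report_button_state(struct device_state *st, const u8 *data)
{
	st->clicks += data[0] & 1;   /* e.g. BTN_MOUSE press/release */
}

static int handle_raw_event(struct device_state *st, const u8 *data, int size)
{
	if (size == 2 * REPORT_SIZE) {
		/* Two reports packed in one packet: handle each half... */
		handle_raw_event(st, data, REPORT_SIZE);
		handle_raw_event(st, data + REPORT_SIZE, REPORT_SIZE);
		/* ...and return here, so the button update below does not
		 * run a third time for the combined packet. */
		return 1;
	}

	parse_touches(st, data, size);
	report_button_state(st, data);
	return 1;
}
```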
      
Link: https://gitlab.freedesktop.org/libinput/libinput/-/issues/811 # [1]
      Fixes: a462230e ("HID: magicmouse: enable Magic Trackpad support")
Reported-by: Nulo <git@nulo.in>
Signed-off-by: José Expósito <jose.exposito89@gmail.com>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Link: https://lore.kernel.org/r/20221009182747.90730-1-jose.exposito89@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
• ACPI: extlog: Handle multiple records · 1e1c94a9
      Tony Luck authored
      
      [ Upstream commit f6ec01da ]
      
If there is no user space consumer of extlog_mem trace records, then
Linux properly handles multiple error records in an ELOG block:
      
      	extlog_print()
      	  print_extlog_rcd()
      	    __print_extlog_rcd()
      	      cper_estatus_print()
      		apei_estatus_for_each_section()
      
But the other code path, which outputs trace records, hard-codes the
assumption of a single record.
      
      Fix by using the same apei_estatus_for_each_section() iterator
      to step over all records.
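
The upstream fix reuses the existing apei_estatus_for_each_section() iterator; the following is only a generic sketch of the same idea (hypothetical layout and helper names), walking every section of a variable-length status block instead of assuming a single record:

```
#include <stddef.h>

/* Hypothetical layout, for illustration only. */
struct estatus_block {
	unsigned int data_length;        /* total length of all sections */
	/* sections follow immediately after this header */
};

struct estatus_section {
	unsigned int section_length;     /* payload length of this section */
	/* payload follows */
};

/* Hypothetical sink for one trace record. */
void emit_trace_record(const struct estatus_section *sec);

static void trace_all_sections(const struct estatus_block *blk)
{
	const char *p = (const char *)(blk + 1);
	const char *end = p + blk->data_length;

	/* One trace record per section, instead of assuming a single one. */
	while (p + sizeof(struct estatus_section) <= end) {
		const struct estatus_section *sec =
			(const struct estatus_section *)p;

		emit_trace_record(sec);
		p += sizeof(*sec) + sec->section_length;
	}
}
```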
      
      Fixes: 2dfb7d51 ("trace, RAS: Add eMCA trace event interface")
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• btrfs: fix processing of delayed data refs during backref walking · df36a603
      Filipe Manana authored
      
      [ Upstream commit 4fc7b572 ]
      
When processing delayed data references during backref walking and we are
using a shared context (we are being called through fiemap), whenever we
find a delayed data reference for an inode different from the one we are
interested in, we immediately exit and consider the data extent as
shared. This is wrong, because:
      
      1) This might be a DROP reference that will cancel out a reference in the
         extent tree;
      
      2) Even if it's an ADD reference, it may be followed by a DROP reference
         that cancels it out.
      
      In either case we should not exit immediately.
      
Fix this by never exiting when we find a delayed data reference for
another inode - instead add the reference and, if it does not cancel out
another delayed reference, we will exit early when we call
extent_is_shared() after processing all delayed references. If we find
a drop reference, then signal the code that processes references from
the extent tree (add_inline_refs() and add_keyed_refs()) not to exit
immediately if it finds there a reference for another inode, since we
have delayed drop references that may cancel it out. In this latter case
we exit once the rb trees no longer hold references that cancel each
other out and we have two references for different inodes.
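
Purely as a conceptual sketch of the tallying idea above (not btrfs code; all names are invented): delayed ADD and DROP references are accumulated first, and only the net result is used to decide whether the extent looks shared.

```
struct ref_key { unsigned long long root, inode, offset; };

struct ref_entry {
	struct ref_key key;
	long count;             /* net of ADD (+1) and DROP (-1) refs */
};

/* Accumulate a delayed ref instead of deciding "shared" immediately. */
static void add_delayed_ref(struct ref_entry *tab, int *n,
			    struct ref_key key, int is_drop)
{
	for (int i = 0; i < *n; i++) {
		if (tab[i].key.root == key.root &&
		    tab[i].key.inode == key.inode &&
		    tab[i].key.offset == key.offset) {
			tab[i].count += is_drop ? -1 : 1;
			return;
		}
	}
	tab[*n].key = key;
	tab[*n].count = is_drop ? -1 : 1;
	(*n)++;
}

/* Only after all delayed refs are in: the extent looks shared if a
 * *different* inode still holds a positive net reference. */
static int extent_looks_shared(const struct ref_entry *tab, int n,
			       unsigned long long my_inode)
{
	for (int i = 0; i < n; i++)
		if (tab[i].key.inode != my_inode && tab[i].count > 0)
			return 1;
	return 0;
}
```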
      
      Example reproducer for case 1):
      
         $ cat test-1.sh
         #!/bin/bash
      
         DEV=/dev/sdj
         MNT=/mnt/sdj
      
         mkfs.btrfs -f $DEV
         mount $DEV $MNT
      
         xfs_io -f -c "pwrite 0 64K" $MNT/foo
         cp --reflink=always $MNT/foo $MNT/bar
      
         echo
         echo "fiemap after cloning:"
         xfs_io -c "fiemap -v" $MNT/foo
      
         rm -f $MNT/bar
         echo
         echo "fiemap after removing file bar:"
         xfs_io -c "fiemap -v" $MNT/foo
      
         umount $MNT
      
Running it before this patch, the extent is still listed as shared; it has
the flag 0x2000 (FIEMAP_EXTENT_SHARED) set:
      
         $ ./test-1.sh
         fiemap after cloning:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
         fiemap after removing file bar:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
      Example reproducer for case 2):
      
         $ cat test-2.sh
         #!/bin/bash
      
         DEV=/dev/sdj
         MNT=/mnt/sdj
      
         mkfs.btrfs -f $DEV
         mount $DEV $MNT
      
         xfs_io -f -c "pwrite 0 64K" $MNT/foo
         cp --reflink=always $MNT/foo $MNT/bar
      
         # Flush delayed references to the extent tree and commit current
         # transaction.
         sync
      
         echo
         echo "fiemap after cloning:"
         xfs_io -c "fiemap -v" $MNT/foo
      
         rm -f $MNT/bar
         echo
         echo "fiemap after removing file bar:"
         xfs_io -c "fiemap -v" $MNT/foo
      
         umount $MNT
      
Running it before this patch, the extent is still listed as shared; it has
the flag 0x2000 (FIEMAP_EXTENT_SHARED) set:
      
         $ ./test-2.sh
         fiemap after cloning:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
         fiemap after removing file bar:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
After this patch, after deleting bar in both tests, the extent is no longer
reported with the 0x2000 flag; it gets only the flag 0x1
(which is FIEMAP_EXTENT_LAST):
      
         $ ./test-1.sh
         fiemap after cloning:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
         fiemap after removing file bar:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128   0x1
      
         $ ./test-2.sh
         fiemap after cloning:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128 0x2001
      
         fiemap after removing file bar:
         /mnt/sdj/foo:
          EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
            0: [0..127]:        26624..26751       128   0x1
      
      These tests will later be converted to a test case for fstests.
      
      Fixes: dc046b10 ("Btrfs: make fiemap not blow when you have lots of snapshots")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
• r8152: add PID for the Lenovo OneLink+ Dock · c63a8cce
      Jean-Francois Le Fillatre authored
      
      commit 1bd3a383 upstream.
      
      The Lenovo OneLink+ Dock contains an RTL8153 controller that behaves as
      a broken CDC device by default. Add the custom Lenovo PID to the r8152
      driver to support it properly.
      
      Also, systems compatible with this dock provide a BIOS option to enable
      MAC address passthrough (as per Lenovo document "ThinkPad Docking
      Solutions 2017"). Add the custom PID to the MAC passthrough list too.
      
      Tested on a ThinkPad 13 1st gen with the expected results:
      
      passthrough disabled: Invalid header when reading pass-thru MAC addr
      passthrough enabled:  Using pass-thru MAC addr XX:XX:XX:XX:XX:XX
      
Signed-off-by: Jean-Francois Le Fillatre <jflf_kernel@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• arm64: errata: Remove AES hwcap for COMPAT tasks · 06035fd1
      James Morse authored
      
      commit 44b3834b upstream.
      
      Cortex-A57 and Cortex-A72 have an erratum where an interrupt that
      occurs between a pair of AES instructions in aarch32 mode may corrupt
      the ELR. The task will subsequently produce the wrong AES result.
      
      The AES instructions are part of the cryptographic extensions, which are
      optional. User-space software will detect the support for these
      instructions from the hwcaps. If the platform doesn't support these
      instructions a software implementation should be used.
      
      Remove the hwcap bits on affected parts to indicate user-space should
      not use the AES instructions.
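
For 32-bit (compat) user space, checking the hwcap before picking an AES path might look like the sketch below; getauxval() and HWCAP2_AES are the standard arm32 interfaces, the rest is illustrative.

```
/* Build as a 32-bit ARM binary. */
#include <stdio.h>
#include <sys/auxv.h>     /* getauxval(), AT_HWCAP2 */
#include <asm/hwcap.h>    /* HWCAP2_AES */

int main(void)
{
	if (getauxval(AT_HWCAP2) & HWCAP2_AES)
		printf("hardware AES instructions advertised\n");
	else
		printf("no AES hwcap: fall back to a software implementation\n");
	return 0;
}
```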
      
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20220714161523.279570-3-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
[florian: resolved conflicts in arch/arm64/tools/cpucaps and cpu_errata.c]
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• KVM: arm64: vgic: Fix exit condition in scan_its_table() · 62ab3d75
      Eric Ren authored
      
      commit c000a260 upstream.
      
      With some PCIe topologies, restoring a guest fails while
      parsing the ITS device tables.
      
      Reproducer hints:
      1. Create ARM virt VM with pxb-pcie bus which adds
         extra host bridges, with qemu command like:
      
      ```
        -device pxb-pcie,bus_nr=8,id=pci.x,numa_node=0,bus=pcie.0 \
        -device pcie-root-port,..,bus=pci.x \
        ...
        -device pxb-pcie,bus_nr=37,id=pci.y,numa_node=1,bus=pcie.0 \
        -device pcie-root-port,..,bus=pci.y \
        ...
      
      ```
2. Ensure the guest uses a 2-level device table
3. Perform a VM migration, which saves and restores the device tables
      
In that setup, we get a big "offset" between 2 device_ids,
which makes the unsigned "len" underflow to a big positive number,
causing the scan loop to continue with a bad GPA. For example:
      
1. the L1 table has 2 entries;
2. we are now scanning the L2 table at entry index 2075 (pointed
   to by the first L1 entry);
3. if the next device id is 9472, we get a big offset: 7397;
4. with unsigned 'len' and 'len -= offset * esz', len underflows to a
   positive number, and we mistakenly enter the next iteration with a
   bad GPA (it should instead break out of the current L2 table scan
   and jump to the next L1 table entry);
5. that bad GPA fails the guest read.
      
      Fix it by stopping the L2 table scan when the next device id is
      outside of the current table, allowing the scan to continue from
      the next L1 table entry.
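
A heavily simplified sketch of the loop shape and the guard described above (not the vgic-its code; all names are illustrative): before advancing by the computed byte offset, bail out of the current table when that offset would step past its end.

```
#include <stdint.h>

/* Hypothetical per-entry callback: reads the entry at 'gpa' for device 'id'
 * and returns how many entry slots to skip to reach the next valid device id
 * (>= 1), or <= 0 on error. */
typedef int (*entry_fn_t)(uint64_t gpa, uint32_t id);

static int scan_table(uint64_t gpa, uint64_t len, uint32_t esz,
		      uint32_t id, entry_fn_t fn)
{
	while (len > 0) {
		int next = fn(gpa, id);
		uint64_t byte_offset;

		if (next <= 0)
			return next;

		byte_offset = (uint64_t)next * esz;
		if (byte_offset >= len)
			break;	/* next id lies beyond this table: stop here
				 * and let the caller move on to the next
				 * level-1 entry, instead of letting the
				 * unsigned 'len' underflow below */

		id += next;
		gpa += byte_offset;
		len -= byte_offset;
	}
	return 0;
}
```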
      
      Thanks to Eric Auger for the fix suggestion.
      
Fixes: 920a7a8f ("KVM: arm64: vgic-its: Add infrastructure for table lookup")
Suggested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Eric Ren <renzhengeek@gmail.com>
[maz: commit message tidy-up]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/d9c3a564af9e2c5bf63f48a7dcbf08cd593c5c0b.1665802985.git.renzhengeek@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ata: ahci: Match EM_MAX_SLOTS with SATA_PMP_MAX_PORTS · da2ea4a9
      Kai-Heng Feng authored
      commit 1e41e693 upstream.
      
      UBSAN complains about array-index-out-of-bounds:
      [ 1.980703] kernel: UBSAN: array-index-out-of-bounds in /build/linux-9H675w/linux-5.15.0/drivers/ata/libahci.c:968:41
      [ 1.980709] kernel: index 15 is out of range for type 'ahci_em_priv [8]'
      [ 1.980713] kernel: CPU: 0 PID: 209 Comm: scsi_eh_8 Not tainted 5.15.0-25-generic #25-Ubuntu
      [ 1.980716] kernel: Hardware name: System manufacturer System Product Name/P5Q3, BIOS 1102 06/11/2010
      [ 1.980718] kernel: Call Trace:
      [ 1.980721] kernel: <TASK>
      [ 1.980723] kernel: show_stack+0x52/0x58
      [ 1.980729] kernel: dump_stack_lvl+0x4a/0x5f
      [ 1.980734] kernel: dump_stack+0x10/0x12
      [ 1.980736] kernel: ubsan_epilogue+0x9/0x45
      [ 1.980739] kernel: __ubsan_handle_out_of_bounds.cold+0x44/0x49
      [ 1.980742] kernel: ahci_qc_issue+0x166/0x170 [libahci]
      [ 1.980748] kernel: ata_qc_issue+0x135/0x240
      [ 1.980752] kernel: ata_exec_internal_sg+0x2c4/0x580
      [ 1.980754] kernel: ? vprintk_default+0x1d/0x20
      [ 1.980759] kernel: ata_exec_internal+0x67/0xa0
      [ 1.980762] kernel: sata_pmp_read+0x8d/0xc0
      [ 1.980765] kernel: sata_pmp_read_gscr+0x3c/0x90
      [ 1.980768] kernel: sata_pmp_attach+0x8b/0x310
      [ 1.980771] kernel: ata_eh_revalidate_and_attach+0x28c/0x4b0
      [ 1.980775] kernel: ata_eh_recover+0x6b6/0xb30
      [ 1.980778] kernel: ? ahci_do_hardreset+0x180/0x180 [libahci]
      [ 1.980783] kernel: ? ahci_stop_engine+0xb0/0xb0 [libahci]
      [ 1.980787] kernel: ? ahci_do_softreset+0x290/0x290 [libahci]
      [ 1.980792] kernel: ? trace_event_raw_event_ata_eh_link_autopsy_qc+0xe0/0xe0
      [ 1.980795] kernel: sata_pmp_eh_recover.isra.0+0x214/0x560
      [ 1.980799] kernel: sata_pmp_error_handler+0x23/0x40
      [ 1.980802] kernel: ahci_error_handler+0x43/0x80 [libahci]
      [ 1.980806] kernel: ata_scsi_port_error_handler+0x2b1/0x600
      [ 1.980810] kernel: ata_scsi_error+0x9c/0xd0
      [ 1.980813] kernel: scsi_error_handler+0xa1/0x180
      [ 1.980817] kernel: ? scsi_unjam_host+0x1c0/0x1c0
      [ 1.980820] kernel: kthread+0x12a/0x150
      [ 1.980823] kernel: ? set_kthread_struct+0x50/0x50
      [ 1.980826] kernel: ret_from_fork+0x22/0x30
      [ 1.980831] kernel: </TASK>
      
This happens because sata_pmp_init_links() initializes link->pmp with values
up to SATA_PMP_MAX_PORTS, while em_priv is declared as an 8-element array.
      
I can't find the maximum number of Enclosure Management ports specified in
the AHCI spec v1.3.1, but "12.2.1 LED message type" states that "Port
Multiplier Information" can utilize 4 bits, which implies it can support up
to 16 ports. Hence, use SATA_PMP_MAX_PORTS as EM_MAX_SLOTS to resolve the
issue.
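
Schematically, the change ties the enclosure-management slot array to the same bound that link->pmp can reach, rather than a smaller hard-coded size. The sketch below is illustrative only; the real constants live in <linux/libata.h> and drivers/ata/ahci.h, and the value shown is a placeholder.

```
/* Schematic only; the value below is a stand-in for SATA_PMP_MAX_PORTS. */
enum { PMP_MAX_PORTS_EXAMPLE = 16 };

struct em_slot_example { int led_state; };

struct port_priv_example {
	/*
	 * Indexed by link->pmp, so it must be sized by the same bound that
	 * link->pmp can reach, not by a smaller hard-coded 8.
	 */
	struct em_slot_example em_priv[PMP_MAX_PORTS_EXAMPLE];
};
```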
      
      BugLink: https://bugs.launchpad.net/bugs/1970074
      
      
      Cc: stable@vger.kernel.org
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ata: ahci-imx: Fix MODULE_ALIAS · 5e7bff83
      Alexander Stein authored
      
      commit 979556f1 upstream.
      
      'ahci:' is an invalid prefix, preventing the module from autoloading.
      Fix this by using the 'platform:' prefix and DRV_NAME.
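
The idiom this describes looks like the following in a platform driver (a sketch, not the exact diff; the DRV_NAME value is assumed here):

```
/* Platform drivers are matched by the "platform:" alias prefix; an
 * "ahci:..." alias never matches, so the module is not autoloaded.
 * DRV_NAME is assumed to be the driver's platform device name. */
#define DRV_NAME "ahci-imx"

MODULE_ALIAS("platform:" DRV_NAME);	/* e.g. expands to "platform:ahci-imx" */
```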
      
      Fixes: 9e54eae2 ("ahci_imx: add ahci sata support on imx platforms")
      Cc: stable@vger.kernel.org
Signed-off-by: Alexander Stein <alexander.stein@ew.tq-group.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• x86/microcode/AMD: Apply the patch early on every logical thread · 12a634b6
      Borislav Petkov authored
      
      commit e7ad18d1 upstream.
      
      Currently, the patch application logic checks whether the revision
      needs to be applied on each logical CPU (SMT thread). Therefore, on SMT
      designs where the microcode engine is shared between the two threads,
      the application happens only on one of them as that is enough to update
      the shared microcode engine.
      
      However, there are microcode patches which do per-thread modification,
      see Link tag below.
      
      Therefore, drop the revision check and try applying on each thread. This
      is what the BIOS does too so this method is very much tested.
      
Btw, change only the early paths. On the late loading paths, there's no
point in doing per-thread modification because if it is some case like
in the bugzilla below - removing a CPUID flag - the kernel cannot go and
un-use features it has detected are there early. For that, one should
use early loading anyway.
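
Schematically (not the actual loader code; all names are invented), the early path changes from "apply only if the installed revision is older" to "attempt the update on every logical thread":

```
/* Illustration only, not the loader code; all names are invented. */
struct ucode_patch_example { unsigned int patch_rev; const void *data; };

void apply_patch_to_engine(const void *data);	/* hypothetical apply helper */

static void early_apply_on_this_thread(const struct ucode_patch_example *p,
					unsigned int current_rev)
{
	/*
	 * Previously: return early when current_rev >= p->patch_rev, which
	 * skipped the second SMT thread on parts with a shared microcode
	 * engine.  Now: attempt the update on every logical thread, since
	 * some patches also make per-thread changes.
	 */
	(void)current_rev;		/* no longer used to gate the write */
	apply_patch_to_engine(p->data);
}
```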
      
        [ bp: Fixes does not contain the oldest commit which did check for
          equality but that is good enough. ]
      
      Fixes: 8801b3fc ("x86/microcode/AMD: Rework container parsing")
Reported-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Ștefan Talpalaru <stefantalpalaru@yahoo.com>
Cc: <stable@vger.kernel.org>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216211
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ocfs2: fix BUG when iput after ocfs2_mknod fails · 853317a5
      Joseph Qi authored
      commit 759a7c61 upstream.
      
Commit b1529a41 ("ocfs2: should reclaim the inode if
'__ocfs2_mknod_locked' returns an error") tried to reclaim the claimed
inode if __ocfs2_mknod_locked() fails later.  But this introduces a race:
the freed bit may be reused immediately by another thread, which will
update the dinode, e.g.  i_generation.  An iput of this inode will then
lead to the BUG: inode->i_generation != le32_to_cpu(fe->i_generation)

We could mark this inode as bad, but we do want to perform operations like
wipe in some cases.  Since the claimed inode bit can only result in a
dinode going missing until fsck brings it back, this seems not a big
problem.  So just leave it as is by reverting the reclaim logic.
      
      Link: https://lkml.kernel.org/r/20221017130227.234480-1-joseph.qi@linux.alibaba.com
      
      
      Fixes: b1529a41 ("ocfs2: should reclaim the inode if '__ocfs2_mknod_locked' returns an error")
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reported-by: Yan Wang <wangyan122@huawei.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
• ocfs2: clear dinode links count in case of error · 0f6097b6
      Joseph Qi authored
      commit 28f4821b upstream.
      
In ocfs2_mknod(), if an error occurs after the dinode has been successfully
allocated, the ocfs2 i_links_count will not be 0.

So even though we clear the inode's i_nlink before iput in the error
handling path, it still won't wipe the inode, since we'll refresh the inode
from the dinode during the inode lock.  So, just as we clear the inode's
i_nlink, clear the ocfs2 i_links_count as well.  Also do the same change
for ocfs2_symlink().
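
A sketch of the shape of this error path (illustration only, not the exact ocfs2_mknod() code; the surrounding variables are assumed context): both the VFS inode's link count and the on-disk dinode's i_links_count are zeroed before the final iput(), so the refresh from the dinode can no longer resurrect a positive link count.

```
	/* Error-path sketch, illustration only. */
	if (status < 0 && inode_created) {
		clear_nlink(inode);               /* in-memory i_nlink = 0 */
		ocfs2_set_links_count(dinode, 0); /* on-disk i_links_count = 0, so
						   * the refresh done under the
						   * inode lock cannot bring the
						   * link count back */
		iput(inode);                      /* the final iput now wipes it */
	}
```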
      
      Link: https://lkml.kernel.org/r/20221017130227.234480-2-joseph.qi@linux.alibaba.com
      
      
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Reported-by: Yan Wang <wangyan122@huawei.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>