- Mar 18, 2025
-
-
Chunhai Guo authored
This patch aims to allocate bvpages and short-lived compressed pages from the reserved pool first. After applying this patch, there are three benefits.

1. It reduces the page allocation time. The bvpages and short-lived compressed pages account for about 4% of the pages allocated from the system in the multi-app launch benchmarks [1]. It reduces the page allocation time accordingly and lowers the likelihood of blockage by page allocation in low memory scenarios.

2. The pages in the reserved pool will be allocated on demand. Currently, bvpages and short-lived compressed pages are short-lived pages allocated from the system, and the pages in the reserved pool all originate from short-lived pages. Consequently, the number of reserved pool pages will increase to z_erofs_rsv_nrpages over time. With this patch, all short-lived pages are allocated from the reserved pool first, so the number of reserved pool pages will only increase when there are not enough pages. Thus, even if z_erofs_rsv_nrpages is set to a large number for specific reasons, the actual number of reserved pool pages may remain low as per demand. In the multi-app launch benchmarks [1], z_erofs_rsv_nrpages is set at 256, while the number of reserved pool pages remains below 64.

3. When erofs cache decompression is disabled (EROFS_ZIP_CACHE_DISABLED), all pages will *only* be allocated from the reserved pool for erofs. This will significantly reduce the memory pressure from erofs.

[1] For additional details on the multi-app launch benchmarks, please refer to commit 0f6273ab ("erofs: add a reserved buffer pool for lz4 decompression").

Signed-off-by: Chunhai Guo <guochunhai@vivo.com>
Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Link: https://lore.kernel.org/r/20240906121110.3701889-1-guochunhai@vivo.com
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Bug: 387202250
Bug: 404427448
Change-Id: Ife45adcb4c22c9d73952db1de956e1b9cda1b8c2
(cherry picked from commit 79f504a2)
Signed-off-by: liujinbao1 <liujinbao1@xiaomi.corp-partner.google.com>
(cherry picked from commit 6e7af99d)
-
- Mar 11, 2025
-
-
Kaiqian Zhu authored
Since commit 3a5a6d0c ("cpuset: don't nest cgroup_mutex inside get_online_cpus()"), cpuset hotplug was done asynchronously via a work function. This was done to avoid recursive locking of cgroup_mutex.

Since then, the cgroup locking scheme has changed quite a bit. A cpuset_mutex was introduced to protect cpuset-specific operations. The cpuset_mutex was then replaced by a cpuset_rwsem. With commit d74b27d6 ("cgroup/cpuset: Change cpuset_rwsem and hotplug lock order"), cpu_hotplug_lock is acquired before cpuset_rwsem. Later on, cpuset_rwsem was reverted back to cpuset_mutex. All these locking changes allow the hotplug code to call into the cpuset core directly.

The following commits were also merged due to the asynchronous nature of cpuset hotplug processing.

- commit b22afcdf ("cpu/hotplug: Cure the cpusets trainwreck")
- commit 50e76632 ("sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs")
- commit 28b89b9e ("cpuset: handle race between CPU hotplug and cpuset_hotplug_work")

Clean up all these bandages by making cpuset hotplug processing synchronous again, with the exception that the call to cgroup_transfer_tasks() to transfer tasks out of an empty cgroup v1 cpuset, if necessary, will still be done via a work function due to the existing cgroup_mutex -> cpu_hotplug_lock dependency. It is possible to reverse that dependency, but that would require updating a number of different cgroup controllers. This special hotplug code path should rarely be taken anyway.

As all the cpuset states will be updated by the end of the hotplug operation, we can revert most of the above commits, except commit 50e76632 ("sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs"), which is partially reverted. Also remove some cpus_read_lock trylock attempts in the cpuset partition code, as they are no longer necessary since cpu_hotplug_lock is now held for the whole duration of the cpuset hotplug code path.

Signed-off-by: Waiman Long <longman@redhat.com>
Tested-by: Valentin Schneider <vschneid@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Bug: 401393559
Bug: 402078031
(cherry picked from commit 2125c003 https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master)
[kaiqian: Removed all the cpus_read_trylock() related functions introduced in later cpuset updates]
Change-Id: I252e24629388a0be5746ac4cc3f475ea6767a462
Signed-off-by: Zhu Kaiqian <zhukaiqian@xiaomi.com>
-
- Mar 06, 2025
-
-
pengzhongcui authored
2 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_tune_swappiness' 'struct tracepoint __tracepoint_android_vh_shrink_slab_async' Bug: 399777353 Bug: 400818928 Bug: 401099893 Change-Id: If3fb7fa00349160e5b939b53208725396237c999 Signed-off-by:
pengzhongcui <pengzhongcui@xiaomi.corp-partner.google.com> (cherry picked from commit 8f602e19)
-
pengzhongcui authored
One vendor hook added: android_vh_do_shrink_slab_ex. Add a vendor hook point in do_shrink_slab to optimize for user-experience-related threads and time-consuming shrinkers.
Bug: 399777353
Bug: 400818928
Bug: 401099893
Change-Id: I63778c73f76930fe27869e33ba6cdb97d50cf543
Signed-off-by: pengzhongcui <pengzhongcui@xiaomi.corp-partner.google.com>
(cherry picked from commit 05ab4ba8)
-
xiaoxiang.xiong authored
74 function symbol(s) added 'u64 __blkg_prfill_rwstat(struct seq_file*, struct blkg_policy_data*, const struct blkg_rwstat_sample*)' 'int __percpu_counter_init_many(struct percpu_counter*, s64, gfp_t, u32, struct lock_class_key*)' 's64 __percpu_counter_sum(struct percpu_counter*)' 'int _atomic_dec_and_lock_irqsave(atomic_t*, spinlock_t*, unsigned long*)' 'void add_disk_randomness(struct gendisk*)' 'ssize_t badblocks_show(struct badblocks*, char*, int)' 'void bdev_end_io_acct(struct block_device*, enum req_op, unsigned int, unsigned long)' 'unsigned long bdev_start_io_acct(struct block_device*, enum req_op, unsigned long)' 'const char* bdi_dev_name(struct backing_dev_info*)' 'void bio_associate_blkg_from_css(struct bio*, struct cgroup_subsys_state*)' 'struct bio* bio_split(struct bio*, int, gfp_t, struct bio_set*)' 'void bio_uninit(struct bio*)' 'struct gendisk* blk_mq_alloc_disk_for_queue(struct request_queue*, struct lock_class_key*)' 'void blk_queue_required_elevator_features(struct request_queue*, unsigned int)' 'void blkcg_print_blkgs(struct seq_file*, struct blkcg*, u64(*)(struct seq_file*, struct blkg_policy_data*, int), const struct blkcg_policy*, int, bool)' 'int blkg_conf_prep(struct blkcg*, const struct blkcg_policy*, struct blkg_conf_ctx*)' 'u64 blkg_prfill_rwstat(struct seq_file*, struct blkg_policy_data*, int)' 'void blkg_rwstat_exit(struct blkg_rwstat*)' 'int blkg_rwstat_init(struct blkg_rwstat*, gfp_t)' 'void blkg_rwstat_recursive_sum(struct blkcg_gq*, struct blkcg_policy*, int, struct blkg_rwstat_sample*)' 'enum scsi_pr_type block_pr_type_to_scsi(enum pr_type)' 'int block_read_full_folio(struct folio*, get_block_t*)' 'struct bsg_device* bsg_register_queue(struct request_queue*, struct device*, const char*, bsg_sg_io_fn*)' 'void bsg_unregister_queue(struct bsg_device*)' 'void call_rcu_hurry(struct callback_head*, rcu_callback_t)' 'unsigned long clock_t_to_jiffies(unsigned long)' 'int devcgroup_check_permission(short, u32, u32, short)' 'bool 
disk_check_media_change(struct gendisk*)' 'struct device_driver* driver_find(const char*, const struct bus_type*)' 'blk_status_t errno_to_blk_status(int)' 'bool folio_mark_dirty(struct folio*)' 'struct cpumask* group_cpus_evenly(unsigned int)' 'struct io_cq* ioc_find_get_icq(struct request_queue*)' 'struct io_cq* ioc_lookup_icq(struct request_queue*)' 'void* kmem_cache_alloc_node(struct kmem_cache*, gfp_t, int)' 'void* mempool_alloc_pages(gfp_t, void*)' 'void mempool_free_pages(void*, void*)' 'unsigned int mmc_calc_max_discard(struct mmc_card*)' 'int mmc_card_alternative_gpt_sector(struct mmc_card*, sector_t*)' 'int mmc_cqe_recovery(struct mmc_host*)' 'int mmc_cqe_start_req(struct mmc_host*, struct mmc_request*)' 'void mmc_crypto_prepare_req(struct mmc_queue_req*)' 'int mmc_detect_card_removed(struct mmc_host*)' 'int mmc_erase(struct mmc_card*, unsigned int, unsigned int, unsigned int)' 'int mmc_poll_for_busy(struct mmc_card*, unsigned int, bool, enum mmc_busy_cmd)' 'int mmc_register_driver(struct mmc_driver*)' 'void mmc_retune_pause(struct mmc_host*)' 'void mmc_retune_unpause(struct mmc_host*)' 'void mmc_run_bkops(struct mmc_card*)' 'int mmc_sanitize(struct mmc_card*, unsigned int)' 'int mmc_start_request(struct mmc_host*, struct mmc_request*)' 'void mmc_unregister_driver(struct mmc_driver*)' 'void percpu_counter_destroy_many(struct percpu_counter*, u32)' 'bool percpu_ref_is_zero(struct percpu_ref*)' 'void percpu_ref_kill_and_confirm(struct percpu_ref*, percpu_ref_func_t*)' 'void percpu_ref_resurrect(struct percpu_ref*)' 'void percpu_ref_switch_to_atomic_sync(struct percpu_ref*)' 'void percpu_ref_switch_to_percpu(struct percpu_ref*)' 'void put_io_context(struct io_context*)' 'int radix_tree_preload(gfp_t)' 'struct folio* read_cache_folio(struct address_space*, unsigned long, filler_t*, struct file*)' 'enum scsi_disposition scsi_check_sense(struct scsi_cmnd*)' 'void scsi_eh_finish_cmd(struct scsi_cmnd*, struct list_head*)' 'enum pr_type scsi_pr_type_to_block(enum 
scsi_pr_type)' 'int scsi_rescan_device(struct scsi_device*)' 'const u8* scsi_sense_desc_find(const u8*, int, int)' 'void sdev_evt_send_simple(struct scsi_device*, enum scsi_device_event, gfp_t)' 'int thaw_super(struct super_block*, enum freeze_holder)' 'void trace_seq_puts(struct trace_seq*, const char*)' 'int transport_add_device(struct device*)' 'void transport_configure_device(struct device*)' 'void transport_destroy_device(struct device*)' 'void transport_remove_device(struct device*)' 'void transport_setup_device(struct device*)' 2 variable symbol(s) added 'struct cgroup_subsys io_cgrp_subsys' 'struct static_key_true io_cgrp_subsys_on_dfl_key' Bug: 400475995 Bug: 401190798 Change-Id: I959e7f45641df674096da689089096bd14e4ed65 Signed-off-by:
xiaoxiang.xiong <xiaoxiang.xiong@transsion.com> (cherry picked from commit ca0752ee)
-
- Mar 05, 2025
-
-
weipengliang authored
38 function symbol(s) added 'unsigned long __alloc_pages_bulk(gfp_t, int, nodemask_t*, int, struct list_head*, struct page**)' 'int __hwspin_trylock(struct hwspinlock*, int, unsigned long*)' 'int __traceiter_android_vh_freq_table_limits(void*, struct cpufreq_policy*, unsigned int, unsigned int)' 'int __traceiter_cma_alloc_busy_retry(void*, const char*, unsigned long, const struct page*, unsigned long, unsigned int)' 'int __traceiter_cma_alloc_finish(void*, const char*, unsigned long, const struct page*, unsigned long, unsigned int, int)' 'int __traceiter_cma_alloc_start(void*, const char*, unsigned long, unsigned int)' 'int __traceiter_cma_release(void*, const char*, unsigned long, const struct page*, unsigned long)' 'void arch_wb_cache_pmem(void*, size_t)' 'int blk_crypto_init_key(struct blk_crypto_key*, const u8*, size_t, enum blk_crypto_key_type, enum blk_crypto_mode_num, unsigned int, unsigned int)' 'int blk_crypto_start_using_key(struct block_device*, const struct blk_crypto_key*)' 'unsigned int cpumask_any_distribute(const struct cpumask*)' 'void devfreq_get_freq_range(struct devfreq*, unsigned long*, unsigned long*)' 'int device_property_read_u64_array(const struct device*, const char*, u64*, size_t)' 'int devm_rproc_add(struct device*, struct rproc*)' 'struct rproc* devm_rproc_alloc(struct device*, const char*, const struct rproc_ops*, const char*, int)' 'int dma_fence_signal_timestamp(struct dma_fence*, ktime_t)' 'int dw_pcie_link_up(struct dw_pcie*)' 'int fwnode_irq_get(const struct fwnode_handle*, unsigned int)' 'int gether_get_host_addr_cdc(struct net_device*, char*, int)' 'int gpio_request_array(const struct gpio*, size_t)' 'int hwspin_lock_get_id(struct hwspinlock*)' 'struct device_node* of_get_next_cpu_node(struct device_node*)' 'const char* pci_speed_string(enum pci_bus_speed)' 'int pcie_capability_write_word(struct pci_dev*, int, u16)' 'struct pinctrl_gpio_range* pinctrl_find_gpio_range_from_pin(struct pinctrl_dev*, unsigned int)' 'int 
probe_irq_off(unsigned long)' 'unsigned long probe_irq_on()' 'int proc_do_large_bitmap(struct ctl_table*, int, void*, size_t*, loff_t*)' 'struct pwm_device* pwm_request_from_chip(struct pwm_chip*, unsigned int, const char*)' 'struct sys_off_handler* register_sys_off_handler(enum sys_off_mode, int, int(*)(struct sys_off_data*), void*)' 'int rproc_detach(struct rproc*)' 'void* snd_usb_find_csint_desc(void*, int, void*, u8)' 'const struct audioformat* snd_usb_find_format(struct list_head*, snd_pcm_format_t, unsigned int, unsigned int, bool, struct snd_usb_substream*)' 'depot_stack_handle_t stack_depot_save(unsigned long*, unsigned int, gfp_t)' 'void tcp_get_info(struct sock*, struct tcp_info*)' 'void uart_xchar_out(struct uart_port*, int)' 'int usb_pipe_type_check(struct usb_device*, unsigned int)' 'const char* usb_state_string(enum usb_device_state)' 5 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_freq_table_limits' 'struct tracepoint __tracepoint_cma_alloc_busy_retry' 'struct tracepoint __tracepoint_cma_alloc_finish' 'struct tracepoint __tracepoint_cma_alloc_start' 'struct tracepoint __tracepoint_cma_release' Bug: 395131250 Bug: 400566736 Change-Id: Idab764db85e4711cbcf544ef4268a3e8b7d6dd41 Signed-off-by:
weipengliang <weipengliang@xiaomi.com> (cherry picked from commit fcbb7926)
-
- Mar 04, 2025
-
-
yipeng xiang authored
compact: Whitelist the __traceiter_android_vh_proactive_compact_wmark_high symbol.
1 function symbol(s) added 'int __traceiter_android_vh_proactive_compact_wmark_high(void*, int*)'
1 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_proactive_compact_wmark_high'
Bug: 399269938
Bug: 391491611
Change-Id: I06fd42b8725ca5e708fa1f113871d8e0e9a6f744
Signed-off-by: yipeng xiang <yipengxiang@honor.corp-partner.google.com>
(cherry picked from commit a47cc5b9)
-
yipeng xiang authored
Add vendor hook to bypass compact_node if fragmentation score is tiny Bug: 391491611 Bug: 399269938 Change-Id: I66df2bce09e08137e4812468e024fa24fcb97259 Signed-off-by:
yipeng xiang <yipengxiang@honor.corp-partner.google.com> (cherry picked from commit 891189b6) (cherry picked from commit 5b45df2d)
-
yipeng xiang authored
1 function symbol(s) added 'bool isolate_folio(struct lruvec*, struct folio*, struct scan_control*)' Bug: 399269938 Bug: 390332073 Change-Id: I69c8eb97661f465fab0c1cd46fbd9cab64fb75f8 Signed-off-by:
yipeng xiang <yipengxiang@honor.corp-partner.google.com> (cherry picked from commit fc463bdf)
-
yipeng xiang authored
Export isolate_folio and reclaim_pages to support reclaiming pages from a kernel module (.ko).
Bug: 399269938
Bug: 390332073
Change-Id: Ib224548baed1217ef96cd3974775c8dc65e77a50
Signed-off-by: yipeng xiang <yipengxiang@honor.corp-partner.google.com>
(cherry picked from commit 0e47a739)
(cherry picked from commit b4c5ce4f)
-
Rui Chen authored
24 function symbol(s) added 'int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data)' 'void mempool_exit(mempool_t *pool)' 'int dm_register_target(struct target_type *tt)' 'void dm_unregister_target(struct target_type *tt)' 'int __ref dm_get_device(struct dm_target *ti, const char *path, blk_mode_t mode, struct dm_dev **result)' 'void dm_put_device(struct dm_target *ti, struct dm_dev *d)' 'int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)' 'unsigned int dm_bio_get_target_bio_nr(const struct bio *bio)' 'const char *dm_table_device_name(struct dm_table *t)' 'void dm_table_event(struct dm_table *t)' 'const char *dm_shift_arg(struct dm_arg_set *as)' 'int dm_read_arg_group(const struct dm_arg *arg, struct dm_arg_set *arg_set, unsigned int *value, char **error)' 'void dm_consume_args(struct dm_arg_set *as, unsigned int num_args)' 'void *dm_per_bio_data(struct bio *bio, size_t data_size)' 'void dm_submit_bio_remap(struct bio *clone, struct bio *tgt_clone)' 'unsigned int dm_get_reserved_bio_based_ios(void)' 'int bioset_init(struct bio_set *bs, unsigned int pool_size, unsigned int front_pad, int flags)' 'void bioset_exit(struct bio_set *bs)' 'void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key, const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE], gfp_t gfp_mask)' 'void blk_crypto_evict_key(struct block_device *bdev, const struct blk_crypto_key *key)' 'int blk_crypto_derive_sw_secret(struct block_device *bdev, const u8 *eph_key, size_t eph_key_size, u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])' 'void __sched wait_for_completion_io(struct completion *x)' 'void zero_fill_bio_iter(struct bio *bio, struct bvec_iter start)' 'int __trace_bputs(unsigned long ip, const char *str)' 1 value symbol added 'struct page *empty_zero_page;' Bug: 399269938 Bug: 391513201 Change-Id: I73a25a03489af27392fb04ffe6a83f984c6ae850 Signed-off-by:
Rui Chen <chenrui9@honor.com> (cherry picked from commit b49cbb85) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
Rui Chen authored
3 function symbol(s) added 'int __traceiter_android_vh_io_statistics(void*, struct address_space*, unsigned int, unsigned int, bool, bool)' 'void percpu_ref_exit(struct percpu_ref*)' 'int percpu_ref_init(struct percpu_ref*, percpu_ref_func_t*, unsigned int, gfp_t)' 1 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_io_statistics' Bug: 399269938 Bug: 380502059 Change-Id: Ieb313ab6ebb1fcb411a7a519444743637957ff75 Signed-off-by:
Rui Chen <chenrui9@honor.com> (cherry picked from commit 5b820e14) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
Rui Chen authored
Add vendor hook to get metainfo of direct/buffered reads and writes, so that hot files can be determined in each performance-sensitive user scenario.
Bug: 399269938
Bug: 380502059
Change-Id: Ie7604852df637d6664afd72e87bd6d4b14bbc2a2
Signed-off-by: Rui Chen <chenrui9@honor.com>
(cherry picked from commit affce30e)
Signed-off-by: jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
4 function symbol(s) added 'int __traceiter_rpm_idle(void*, struct device *dev, int flags)' 'int __traceiter_rpm_suspend(void*, struct device *dev, int flags)' 'int __traceiter_rpm_resume(void*, struct device *dev, int flags)' 'int __traceiter_rpm_return_int(void*, struct device *dev, unsigned long ip, int ret)' 4 variable symbol(s) added 'struct tracepoint __tracepoint_rpm_idle' 'struct tracepoint __tracepoint_rpm_suspend' 'struct tracepoint __tracepoint_rpm_resume' 'struct tracepoint __tracepoint_rpm_return_int' Bug: 399269938 Bug: 384649917 Change-Id: I4f5defc1e915aafb67f0cb1588774cbf9e466ff2 Signed-off-by:
liulu liu <liulu.liu@honor.corp-partner.google.com> (cherry picked from commit 46493cec) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
wei li authored
1 function symbol(s) removed 'int __traceiter_android_vh_mutex_unlock_slowpath_before_wakeq(void*, struct mutex*)' 1 variable symbol(s) removed 'struct tracepoint __tracepoint_android_vh_mutex_unlock_slowpath_before_wakeq' 1 function symbol(s) added 'int __traceiter_android_vh_mutex_unlock_slowpath_bf_wakeq(void*, struct mutex*)' 1 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_mutex_unlock_slowpath_bf_wakeq' Bug: 399269938 Bug: 381511799 Change-Id: I92095503aae41aadd5b3e1208cd8ff3221a532c1 Signed-off-by:
wei li <sirius.liwei@honor.corp-partner.google.com> (cherry picked from commit e045e3c1) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
wei li authored
When we use this KMI (__tracepoint_android_vh_mutex_unlock_slowpath_before_wakeq) in vendor modules, modpost reports an error: too long symbol. So, shorten it to __tracepoint_android_vh_mutex_unlock_slowpath_bf_wakeq.
Fixes: 9f6bd037 ("ANDROID: vendor_hooks: add hook in __mutex_unlock_slowpath()")
Bug: 399269938
Bug: 381511799
Change-Id: I1a74ea3433dd2d9e234f56ea81a7c1020f4a56bb
Signed-off-by: wei li <sirius.liwei@honor.corp-partner.google.com>
(cherry picked from commit e0a26bcd)
Signed-off-by: jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
Chenghao Zhao authored
2 function symbol(s) added 'int __traceiter_android_vh_tcp_rcv_established_fast_path(void*, struct sock*)' 'int __traceiter_android_vh_tcp_rcv_established_slow_path(void*, struct sock*)' 2 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_tcp_rcv_established_fast_path' 'struct tracepoint __tracepoint_android_vh_tcp_rcv_established_slow_path' Bug: 399269938 Bug: 378600969 Change-Id: I380ffe2db724916c4336f5af0ada5665e940bebf Signed-off-by:
Chenghao Zhao <zhaochenghao@honor.com> (cherry picked from commit 05869b82) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
Chenghao Zhao authored
1. android_vh_tcp_rcv_established_fast_path: check if there are received packets out of order for the fast path.
2. android_vh_tcp_rcv_established_slow_path: check if there are received packets out of order for the slow path.
Bug: 399269938
Bug: 378600969
Change-Id: Ifad16bd9523ac0c3cc0c0c98dfb0884635f3a537
Signed-off-by: Chenghao Zhao <zhaochenghao@honor.com>
(cherry picked from commit c6058890)
Signed-off-by: jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
jiangxinpei authored
2 function symbol(s) added 'void* android_debug_per_cpu_symbol(enum android_debug_per_cpu_symbol)' 'void* android_debug_symbol(enum android_debug_symbol)' Bug: 399269938 Bug: 377608220 Bug: 287890135 Signed-off-by:
xinpei jiang <jiangxinpei@honor.com> Signed-off-by:
Xuewen Yan <xuewen.yan@unisoc.com> Change-Id: I144dab4b100f38603b507a326c84a9c7a26af7c3 (cherry picked from commit 7959dfaf) Signed-off-by:
jiangxinpei <jiangxinpei@honor.corp-partner.google.com>
-
- Mar 03, 2025
-
-
changyan1 authored
1 function symbol(s) added 'int __traceiter_android_vh_customize_pmd_gfp_bypass(void*, gfp_t*, bool*)' 1 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_customize_pmd_gfp_bypass' Bug: 359422098 Bug: 399794577 Change-Id: I9d0d0ca22fa36fdb4bb659c7ef40cc5adf072787 Signed-off-by:
yan chang <changyan1@xiaomi.com>
-
hutingxian authored
2 function symbol(s) added 'int scsi_device_set_state(struct scsi_device*, enum scsi_device_state)' Bug: 395549038 Bug: 400349723 Change-Id: Ifa1b74740ce72f24826e24c1224ceb23860240cb Signed-off-by:
hutingxian <hutingxian@xiaomi.corp-partner.google.com> (cherry picked from commit fab059ac)
-
yan chang authored
This hook is used as a supplement to the previous hook, to dynamically modify the gfp flags and the bypass decision for PMD-order allocations: https://android-review.googlesource.com/c/kernel/common/+/3227165
Bug: 359422098
Bug: 399794577
Change-Id: Ia33fdc62ab466cf5cfcc53d02b515e28a1ac1431
Signed-off-by: yan chang <changyan1@xiaomi.com>
(cherry picked from commit 1115b66a)
-
Penghao Wei authored
We need to know which module allocates and releases each CMA region. The configuration of CMA resources in the system is fixed, and multiple modules competing for limited CMA resources at the same time may lead to allocation failures and more serious problems.
Bug: 400345217
Bug: 381824582
Change-Id: Id43beb2f36f10fdcf852ef653c6a6f2af2a1e760
Signed-off-by: Penghao Wei <weipenghao@xiaomi.com>
(cherry picked from commit 6b8e1dfc)
Signed-off-by: yan Chang <changyan1@xiaomi.com>
-
Paul Moore authored
Move our existing input sanity checking to the top of sel_write_load() and add a check to ensure the buffer size is non-zero. Move a local variable initialization from the declaration to before it is used. Minor style adjustments. Reported-by:
Sam Sun <samsun1006219@gmail.com> Signed-off-by:
Paul Moore <paul@paul-moore.com> Bug: 400329784 Bug: 386755977 Change-Id: I76ec20258a8ef8a2966e98d523b58a0aa8b49bda (cherry picked from commit 42c77323) Signed-off-by:
yaozhongmin <yaozhongmin@xiaomi.com> (cherry picked from commit 37c8296c)
-
chunpeng li authored
1 function symbol(s) added 'int __traceiter_android_vh_dma_buf_release(void*, struct dma_buf*)' 1 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_dma_buf_release' Bug: 229552121 Bug: 382630460 Bug: 400393157 Change-Id: Ie42fb585e35ee1996993f466fd698c18b268a8e4 Signed-off-by:
chunpeng li <lichunpeng@xiaomi.com> (cherry picked from commit 13ce3023) Signed-off-by:
yan Chang <changyan1@xiaomi.com>
-
Gaofeng Sheng authored
The main function is to check whether the corresponding IOMMU buffer is unmapped when the dmabuf is released, so as to avoid memory being overwritten through stale IOMMU mappings.
Bug: 229552121
Bug: 382630460
Bug: 400393157
Change-Id: I0e6d5ec47c9449bcbfe6b237bd9dea63f11e677e
Signed-off-by: Gaofeng Sheng <gaofeng.sheng@unisoc.com>
Signed-off-by: chunpeng li <lichunpeng@xiaomi.com>
(cherry picked from commit 88796505)
(cherry picked from commit c9da352d)
Signed-off-by: yan Chang <changyan1@xiaomi.com>
-
- Mar 01, 2025
-
-
Qi Han authored
UPSTREAM: f2fs: modify f2fs_is_checkpoint_ready logic to allow more data to be written with the CP disable

When the free segments are used up while checkpointing is disabled, many write or ioctl operations will get ENOSPC error codes, even if there are still many blocks available. We can reproduce it with the following steps:

  dd if=/dev/zero of=f2fs.img bs=1M count=65
  mkfs.f2fs -f f2fs.img
  mount f2fs.img f2fs_dir -o checkpoint=disable:10%
  cd f2fs_dir
  i=1 ; while [[ $i -lt 50 ]] ; do (file_name=./2M_file$i ; dd \
    if=/dev/random of=$file_name bs=1M count=2); i=$((i+1)); done
  sync
  i=1 ; while [[ $i -lt 50 ]] ; do (file_name=./2M_file$i ; truncate \
    -s 1K $file_name); i=$((i+1)); done
  sync
  dd if=/dev/zero of=./file bs=1M count=20

f2fs_need_SSR() allows blocks to be allocated via SSR when checkpointing is disabled, so in f2fs_is_checkpoint_ready() we can check the number of invalid blocks when there are not enough free segments, and return ENOSPC only if the number of invalid blocks is also not enough.

Change-Id: I96cec738b6b4da05c76132e7b6c71ff9c4c63daf
Signed-off-by: Qi Han <hanqi@vivo.com>
Reviewed-by: Chao Yu <chao@kernel.org>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
(cherry picked from commit 84b5bb8b)
(cherry picked from commit c3fe4328)
Bug: 399286786
-
- Feb 28, 2025
-
-
Fei authored
3 function symbol(s) added 'int __traceiter_android_vh_free_mod_mem(void*, const struct module*)' 'int __traceiter_android_vh_set_mod_perm_after_init(void*, const struct module*)' 'int __traceiter_android_vh_set_mod_perm_before_init(void*, const struct module*)' 3 variable symbol(s) added 'struct tracepoint __tracepoint_android_vh_free_mod_mem' 'struct tracepoint __tracepoint_android_vh_set_mod_perm_after_init' 'struct tracepoint __tracepoint_android_vh_set_mod_perm_before_init' Bug: 373794466 Bug: 399785745 Change-Id: I9e76336db92e7b2b8ae2894ee92e45580e7e650d Signed-off-by:
Fei <xuefei7@xiaomi.com>
-
Fei authored
Add vendor hook for module init, so we can get the memory type and address info, then use this info to set the corresponding memory access attribute in the EL2 stage-2 page table. We gain enhanced security protection this way, as long as the stage-2 page table is not corrupted. When modules are released, the corresponding page-table attributes should be torn down and restored.
Bug: 373794466
Bug: 399785745
Change-Id: Ieccb3bdd1041dfe41a9c808a91cc19f04389e826
Signed-off-by: xuefei7 <xuefei7@xiaomi.com>
-
Hugh Dickins authored
Recent changes are putting more pressure on THP deferred split queues: under load revealing long-standing races, causing list_del corruptions, "Bad page state"s and worse (I keep BUGs in both of those, so usually don't get to see how badly they end up without). The relevant recent changes being 6.8's mTHP, 6.10's mTHP swapout, and 6.12's mTHP swapin, improved swap allocation, and underused THP splitting. Before fixing locking: rename misleading folio_undo_large_rmappable(), which does not undo large_rmappable, to folio_unqueue_deferred_split(), which is what it does. But that and its out-of-line __callee are mm internals of very limited usability: add comment and WARN_ON_ONCEs to check usage; and return a bool to say if a deferred split was unqueued, which can then be used in WARN_ON_ONCEs around safety checks (sparing callers the arcane conditionals in __folio_unqueue_deferred_split()). Just omit the folio_unqueue_deferred_split() from free_unref_folios(), all of whose callers now call it beforehand (and if any forget then bad_page() will tell) - except for its caller put_pages_list(), which itself no longer has any callers (and will be deleted separately). Swapout: mem_cgroup_swapout() has been resetting folio->memcg_data 0 without checking and unqueueing a THP folio from deferred split list; which is unfortunate, since the split_queue_lock depends on the memcg (when memcg is enabled); so swapout has been unqueueing such THPs later, when freeing the folio, using the pgdat's lock instead: potentially corrupting the memcg's list. __remove_mapping() has frozen refcount to 0 here, so no problem with calling folio_unqueue_deferred_split() before resetting memcg_data. That goes back to 5.4 commit 87eaceb3 ("mm: thp: make deferred split shrinker memcg aware"): which included a check on swapcache before adding to deferred queue, but no check on deferred queue before adding THP to swapcache. 
That worked fine with the usual sequence of events in reclaim (though there were a couple of rare ways in which a THP on deferred queue could have been swapped out), but 6.12 commit dafff3f4 ("mm: split underused THPs") avoids splitting underused THPs in reclaim, which makes swapcache THPs on deferred queue commonplace. Keep the check on swapcache before adding to deferred queue? Yes: it is no longer essential, but preserves the existing behaviour, and is likely to be a worthwhile optimization (vmstat showed much more traffic on the queue under swapping load if the check was removed); update its comment. Memcg-v1 move (deprecated): mem_cgroup_move_account() has been changing folio->memcg_data without checking and unqueueing a THP folio from the deferred list, sometimes corrupting "from" memcg's list, like swapout. Refcount is non-zero here, so folio_unqueue_deferred_split() can only be used in a WARN_ON_ONCE to validate the fix, which must be done earlier: mem_cgroup_move_charge_pte_range() first try to split the THP (splitting of course unqueues), or skip it if that fails. Not ideal, but moving charge has been requested, and khugepaged should repair the THP later: nobody wants new custom unqueueing code just for this deprecated case. The 87eaceb3 commit did have the code to move from one deferred list to another (but was not conscious of its unsafety while refcount non-0); but that was removed by 5.6 commit fac0516b ("mm: thp: don't need care deferred split queue in memcg charge move path"), which argued that the existence of a PMD mapping guarantees that the THP cannot be on a deferred list. As above, false in rare cases, and now commonly false. Backport to 6.11 should be straightforward. Earlier backports must take care that other _deferred_list fixes and dependencies are included. There is not a strong case for backports, but they can fix cornercases. 
Link: https://lkml.kernel.org/r/8dc111ae-f6db-2da7-b25c-7a20b1effe3b@google.com
Fixes: 87eaceb3 ("mm: thp: make deferred split shrinker memcg aware")
Fixes: dafff3f4 ("mm: split underused THPs")
Change-Id: I86d7fcd68ca35171b679c76ad2a1e21584417fc6
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit f8f931bb)
Bug: 378967818
Bug: 399794577
[ Fix conflict in mem_cgroup_move_account() in file memcontrol-v1.c and trivial conflict with renaming of function folio_unqueue_deferred_split() - yan Chang ]
Signed-off-by: yan Chang <changyan1@xiaomi.com>
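As a rough illustration of the check this commit decides to keep, here is a stand-alone model (not kernel code; the function name, parameters, and return convention are simplified stand-ins for the real deferred_split_folio() path): a THP already in the swap cache is on its way out via reclaim, so queueing it for deferred splitting would only add traffic on the queue.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model only. A partially-mapped THP is normally queued for
 * deferred splitting; one that is in the swap cache is skipped, since
 * reclaim will free or split it anyway. This mirrors the optimization the
 * commit preserves (removing it showed much more queue traffic in vmstat).
 */
static bool should_queue_for_deferred_split(bool in_swapcache,
					    bool partially_mapped)
{
	if (in_swapcache)
		return false;	/* no longer essential, but worthwhile */
	return partially_mapped;
}
```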
-
Kefeng Wang authored
commit 593a10da upstream.

Folios of order <= 1 are not in deferred list, the check of order is added into folio_undo_large_rmappable() from commit 8897277a ("mm: support order-1 folios in the page cache"), but there is a repeated check for small folio (order 0) during each call of the folio_undo_large_rmappable(), so only keep folio_order() check inside the function.

In addition, move all the checks into header file to save a function call for non-large-rmappable or empty deferred_list folio.

Link: https://lkml.kernel.org/r/20240521130315.46072-1-wangkefeng.wang@huawei.com
Change-Id: I1d9811de36061b7df2cab9589e6bb5d6237d73fd
Bug: 399794577
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Upstream commit itself does not apply cleanly, because there are fewer calls to folio_undo_large_rmappable() in this tree. ]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit eb6b6d3e)
Signed-off-by: yan Chang <changyan1@xiaomi.com>
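The "move the checks into a header" pattern above can be sketched with a toy model (illustrative types and names, not the kernel's struct folio or real function bodies): a cheap inline wrapper filters out folios that cannot possibly be on the deferred list, so the out-of-line function is only called when there is work to do.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy folio for illustration; not the kernel's struct folio. */
struct folio {
	unsigned int order;
	bool large_rmappable;
	bool on_deferred_list;
};

/* Out-of-line part: only reached when the folio may be on the list. */
static void __folio_undo_large_rmappable(struct folio *folio)
{
	folio->on_deferred_list = false;
}

/*
 * Inline header wrapper in the spirit of the commit: skip the call for
 * order <= 1 folios (no _deferred_list), non-large-rmappable folios,
 * and folios whose deferred list is already empty.
 */
static inline bool folio_undo_large_rmappable(struct folio *folio)
{
	if (folio->order <= 1 || !folio->large_rmappable)
		return false;
	if (!folio->on_deferred_list)
		return false;
	__folio_undo_large_rmappable(folio);
	return true;
}

/* Helper for exercising the wrapper with a freshly built folio. */
static bool undo_called_for(unsigned int order, bool rmappable, bool queued)
{
	struct folio f = { .order = order, .large_rmappable = rmappable,
			   .on_deferred_list = queued };
	return folio_undo_large_rmappable(&f);
}
```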
-
Matthew Wilcox (Oracle) authored
commit b7b098cf upstream.

Patch series "Various significant MM patches".

These patches all interact in annoying ways which make it tricky to send them out in any way other than a big batch, even though there's not really an overarching theme to connect them.

The big effects of this patch series are:

- folio_test_hugetlb() becomes reliable, even when called without a page reference
- We free up PG_slab, and we could always use more page flags
- We no longer need to check PageSlab before calling page_mapcount()

This patch (of 9):

For compound pages which are at least order-2 (and hence have a deferred_list), initialise it and then we can check at free that the page is not part of a deferred list. We recently found this useful to rule out a source of corruption.

[peterx@redhat.com: always initialise folio->_deferred_list]
Link: https://lkml.kernel.org/r/20240417211836.2742593-2-peterx@redhat.com
Link: https://lkml.kernel.org/r/20240321142448.1645400-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20240321142448.1645400-2-willy@infradead.org
Change-Id: Ib1a3574f8ff6b19f24af4704e9dde290c26bfc55
Bug: 399794577
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Include three small changes from the upstream commit, for backport safety: replace list_del() by list_del_init() in split_huge_page_to_list(), like c010d47f ("mm: thp: split huge page to any lower order pages"); replace list_del() by list_del_init() in folio_undo_large_rmappable(), like 9bcef597 ("mm: memcg: fix split queue list crash when large folio migration"); keep __free_pages() instead of folio_put() in __update_and_free_hugetlb_folio(). ]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 0275e402)
[ Fix conflict in split_huge_page_to_list() to ignore the function folio_ref_freeze() - yan Chang ]
Signed-off-by: yan Chang <changyan1@xiaomi.com>
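The "initialise at prep, check at free" idea above can be modeled outside the kernel with a minimal circular list (stand-ins for the kernel's list_head helpers; names are illustrative): after initialisation the list is empty, so a non-empty list at free time flags that the folio was left on a deferred queue.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal circular doubly linked list, standing in for list_head. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

static bool list_empty(const struct list_head *head)
{
	return head->next == head;
}

struct folio { struct list_head _deferred_list; };

/* As in the patch: initialise the list when the compound page is set up... */
static void prep_compound_folio(struct folio *folio)
{
	INIT_LIST_HEAD(&folio->_deferred_list);
}

/* ...so that at free time a non-empty list indicates corruption. */
static bool folio_ok_to_free(const struct folio *folio)
{
	return list_empty(&folio->_deferred_list);
}

static bool freshly_prepped_folio_is_freeable(void)
{
	struct folio folio;

	prep_compound_folio(&folio);
	return folio_ok_to_free(&folio);
}

static bool queued_folio_is_freeable(void)
{
	struct folio folio;
	struct list_head queue;

	prep_compound_folio(&folio);
	INIT_LIST_HEAD(&queue);
	/* Splice the folio onto a queue: its list is no longer empty. */
	folio._deferred_list.next = &queue;
	folio._deferred_list.prev = &queue;
	return folio_ok_to_free(&folio);
}
```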
-
Matthew Wilcox (Oracle) authored
commit 8897277a upstream.

Folios of order 1 have no space to store the deferred list. This is not a problem for the page cache as file-backed folios are never placed on the deferred list. All we need to do is prevent the core MM from touching the deferred list for order 1 folios and remove the code which prevented us from allocating order 1 folios.

Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Link: https://lkml.kernel.org/r/20240226205534.1603748-3-zi.yan@sent.com
Bug: 399794577
Change-Id: Ibaabee8a7dfd37adb407ee8e3861d301156f7aa5
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Michal Koutny <mkoutny@suse.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit e8769509)
[ Fix conflict in split_huge_page_to_list() to ignore the function folio_ref_freeze(), and delete the filter of order1 in function page_cache_ra_order() - yan Chang ]
Signed-off-by: yan Chang <changyan1@xiaomi.com>
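Why order-1 folios "have no space" can be sketched arithmetically (a simplified model of the layout described in these commits, where the deferred-list metadata lives in a tail page beyond the first two struct pages): a folio of order N spans 1 << N pages, so only order >= 2 folios have a second tail page to carry it.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model: the deferred-list metadata is carried in the second
 * tail page (page index 2) of a large folio, so a folio needs more than
 * two pages, i.e. order >= 2, to have one at all. Order-0 and order-1
 * folios must therefore never touch the deferred list.
 */
static bool folio_has_deferred_list(unsigned int order)
{
	unsigned int nr_pages = 1u << order;

	return nr_pages > 2;	/* needs a second tail page */
}
```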
-
Ryan Roberts authored
commit ec056cef upstream.

The THP machinery does not support order-1 folios because it requires meta data spanning the first 3 `struct page`s. So order-2 is the smallest large folio that we can safely create.

There was a theoretical bug whereby if ra->size was 2 or 3 pages (due to the device-specific bdi->ra_pages being set that way), we could end up with order = 1. Fix this by unconditionally checking if the preferred order is 1 and if so, set it to 0. Previously this was done in a few specific places, but with this refactoring it is done just once, unconditionally, at the end of the calculation.

This is a theoretical bug found during review of the code; I have no evidence to suggest this manifests in the real world (I expect all device-specific ra_pages values are much bigger than 3).

Bug: 399794577
Link: https://lkml.kernel.org/r/20231201161045.3962614-1-ryan.roberts@arm.com
Change-Id: I5b024e995d3b85954cfb35d7df1c2fdcc9be9e16
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 2ad2067e)
Signed-off-by: yan Chang <changyan1@xiaomi.com>
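The fix itself is a one-line clamp applied once at the end of the order calculation; a minimal sketch (the function name is illustrative, not the kernel's):

```c
#include <assert.h>

/*
 * Whatever preferred order the readahead heuristics compute, clamp
 * order-1 to order-0 once at the end: order-2 is the smallest large
 * folio the THP machinery can safely create.
 */
static unsigned int ra_clamp_order(unsigned int order)
{
	if (order == 1)
		order = 0;
	return order;
}
```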
-
Hugh Dickins authored
commit 23e48832 upstream.

folio_prep_large_rmappable() is being used repeatedly along with a conversion from page to folio, a check non-NULL, a check order > 1: wrap it all up into struct folio *page_rmappable_folio(struct page *).

Link: https://lkml.kernel.org/r/8d92c6cf-eebe-748-e29c-c8ab224c741@google.com
Change-Id: Ide07d3577fc7ab6ee3ec8c0680dacfc5d22822c8
Bug: 399794577
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Tejun heo <tj@kernel.org>
Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit bc899023)
Signed-off-by: yan Chang <changyan1@xiaomi.com>
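The wrapper folds the repeated "convert, check non-NULL, check order > 1, then prep" sequence into one helper. A toy model (the struct fields and the folio-from-page conversion are simplified stand-ins; only the shape of the helper follows the commit):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy folio for illustration; not the kernel's struct folio. */
struct folio {
	unsigned int order;
	bool large_rmappable;
};

static void folio_prep_large_rmappable(struct folio *folio)
{
	folio->large_rmappable = true;
}

/*
 * Modeled wrapper: NULL-check and order > 1 check in one place, then
 * prep. (In the kernel the argument is a struct page * converted with
 * page_folio(); here the caller passes the folio directly.)
 */
static struct folio *page_rmappable_folio(struct folio *folio)
{
	if (folio && folio->order > 1)
		folio_prep_large_rmappable(folio);
	return folio;
}

static bool prepped(unsigned int order)
{
	struct folio f = { .order = order, .large_rmappable = false };

	page_rmappable_folio(&f);
	return f.large_rmappable;
}
```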
-
- Feb 27, 2025
-
-
zhengshaobo1 authored
When total_req_power is 0, divvy_up_power() will set granted_power to 0, and cdev will be limited to the lowest performance. If our polling delay is set to 200ms, it means that cdev cannot perform better within 200ms even if cdev has a sudden load. This will affect the performance of cdev and is not as expected. For this reason, if nobody requests power, then set the granted power to the max_power.

Signed-off-by: zhengshaobo1 <zhengshaobo1@xiaomi.com>
Bug: 375959779
Bug: 399554103
(cherry picked from commit 08eb0493 https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git thermal)
Change-Id: I6e7360d8e6b886d9f1f23e9e4fd41f197605d520
Signed-off-by: zhengshaobo1 <zhengshaobo1@xiaomi.com>
(cherry picked from commit 0061c689)
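The fallback can be sketched numerically (a simplified stand-in for divvy_up_power(), not the kernel's actual signature): normally each cooling device is granted a share of the power range proportional to its request; when the total request is zero, grant max_power instead of zero so a sudden load is not throttled for a whole polling interval.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified model of the divvy-up rule with the commit's fallback:
 *   granted = power_range * req / total_req    (proportional share)
 *   granted = max_power                        (when nobody requests power)
 */
static uint32_t granted_power(uint32_t req_power, uint32_t max_power,
			      uint32_t total_req_power, uint32_t power_range)
{
	if (total_req_power == 0)
		return max_power;	/* the fix: don't pin cdev to 0 */
	return (uint32_t)((uint64_t)power_range * req_power / total_req_power);
}
```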
-
fanshaohua authored
1 function symbol(s) added:
  __traceiter_android_vh_futex_wait_queue_start
  __tracepoint_android_vh_futex_wait_queue_start

Bug: 377418046
Bug: 398137864
Signed-off-by: fanshaohua <fanshaohua@xiaomi.com>
Change-Id: I65669f4ed1c5cf3e330171aa53f0e581760e3000
Signed-off-by: fanshaohua <fanshaohua@xiaomi.corp-partner.google.com>
(cherry picked from commit 329bde2c)
[fanshaohua: Resolved minor conflict in android/abi_gki_aarch64_xiaomi ]
Signed-off-by: fanshaohua <fanshaohua@xiaomi.com>
-
- Feb 25, 2025
-
-
luguohong authored
HID descriptors with Battery System (0x85) Charging (0x44) usage are ignored and POWER_SUPPLY_STATUS_DISCHARGING is always reported to user space, even when the device is charging. Map this usage and when it is reported set the right charging status.

In addition, add KUnit tests to make sure that the charging status is correctly set and reported. They can be run with the usual command:

$ ./tools/testing/kunit/kunit.py run --kunitconfig=drivers/hid

Signed-off-by: José Expósito <jose.exposito89@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Bug: 305125317
Bug: 398958000
Change-Id: Iad6a8177ad6954ad8ac2b714cc35acffcf2f226f
(cherry picked from commit a608dc1c)
Signed-off-by: luguohong <luguohong@xiaomi.corp-partner.google.com>
(cherry picked from commit 6465e295)
(cherry picked from commit 68bb0a26)
Signed-off-by: xuyuqing <xuyuqing@xiaomi.corp-partner.google.com>
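The behaviour change can be sketched as a small decision function (illustrative only; the real driver plumbs the usage through HID report parsing, and the enum values below match the power-supply uapi): once the Charging usage is mapped, its reported value selects the status instead of DISCHARGING being hardcoded.

```c
#include <assert.h>
#include <stdbool.h>

/* Subset of the power-supply uapi status values. */
enum power_supply_status {
	POWER_SUPPLY_STATUS_UNKNOWN = 0,
	POWER_SUPPLY_STATUS_CHARGING = 1,
	POWER_SUPPLY_STATUS_DISCHARGING = 2,
};

/*
 * Modeled status selection: before the patch the Charging (0x44) usage
 * was never mapped, so the first branch could not be taken and user
 * space always saw DISCHARGING.
 */
static enum power_supply_status
hid_battery_status(bool charging_usage_mapped, bool charging_reported)
{
	if (charging_usage_mapped && charging_reported)
		return POWER_SUPPLY_STATUS_CHARGING;
	return POWER_SUPPLY_STATUS_DISCHARGING;
}
```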
-
- Feb 21, 2025
-
-
fanshaohua authored
Add hook to monitor the waiting queue enqueueing.

Bug: 377418046
Bug: 398137864
Change-Id: Ie0580151ffa9f1e7e94fc7347e23aca122cc03ec
Signed-off-by: fanshaohua <fanshaohua@xiaomi.corp-partner.google.com>
(cherry picked from commit 44abc109)
-
zhanghui authored
1 function symbol(s) added:
  'int __traceiter_android_vh_filemap_map_pages_range(void*, struct file*, unsigned long, unsigned long, vm_fault_t)'

1 variable symbol(s) added:
  'struct tracepoint __tracepoint_android_vh_filemap_map_pages_range'

Bug: 398123868
Change-Id: I789a16f5d0bc3d11b9518c548276b2ce19514ead
Signed-off-by: zhanghui <zhanghui31@xiaomi.com>
-