Search Results for author: Bingbing Li

Found 19 papers, 5 papers with code

Gland segmentation via dual encoders and boundary-enhanced attention

no code implementations • 29 Jan 2024 • Huadeng Wang, Jiejiang Yu, Bingbing Li, Xipeng Pan, Zhenbing Liu, Rushi Lan, Xiaonan Luo

Accurate and automated gland segmentation on pathological images can assist pathologists in diagnosing the malignancy of colorectal adenocarcinoma.

Segmentation

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

no code implementations • 22 Jan 2024 • Bingbing Li, Geng Yuan, Zigeng Wang, Shaoyi Huang, Hongwu Peng, Payman Behnam, Wujie Wen, Hang Liu, Caiwen Ding

Resistive Random Access Memory (ReRAM) has emerged as a promising platform for deep neural networks (DNNs) due to its support for parallel in-situ matrix-vector multiplication.
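
For readers unfamiliar with in-situ computation, here is a minimal NumPy sketch (illustrative only, not from the paper) of the idea: weights are stored as crossbar cell conductances, inputs are applied as word-line voltages, and each bit-line current sums the products, so an entire matrix-vector multiplication happens in parallel inside the memory array.

```python
import numpy as np

# Idealized ReRAM crossbar: conductances G hold the weights, word-line
# voltages V carry the inputs, and each bit-line current is the analog
# dot product I = V @ G (Kirchhoff's current law). All names are illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # hypothetical DNN weight matrix (4 inputs x 3 outputs)
x = rng.standard_normal(4)        # input activation vector

G = W                             # idealized weight-to-conductance mapping
V = x                             # inputs applied as word-line voltages
I = V @ G                         # all output columns computed in one parallel step

assert np.allclose(I, x @ W)      # matches the digital matrix-vector product
```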

CGCE: A Chinese Generative Chat Evaluation Benchmark for General and Financial Domains

1 code implementation • 23 May 2023 • Xuanyu Zhang, Bingbing Li, Qing Yang

Generative chat models, such as ChatGPT and GPT-4, have revolutionized natural language generation (NLG) by incorporating instructions and human feedback to achieve significant performance improvements.

Text Generation

A Novel Dataset and a Deep Learning Method for Mitosis Nuclei Segmentation and Classification

no code implementations • 27 Dec 2022 • Huadeng Wang, Zhipeng Liu, Rushi Lan, Zhenbing Liu, Xiaonan Luo, Xipeng Pan, Bingbing Li

In addition, the model achieves good performance on the GZMH dataset, which was prepared by our group and will be released for the first time with the publication of this paper.

Segmentation

PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision

1 code implementation • CVPR 2022 • Kehong Gong, Bingbing Li, Jianfeng Zhang, Tao Wang, Jing Huang, Michael Bi Mi, Jiashi Feng, Xinchao Wang

Existing self-supervised 3D human pose estimation schemes have largely relied on weak supervision signals, such as consistency losses, to guide learning, which inevitably leads to inferior results in real-world scenarios with unseen poses.
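
To make the criticized weak supervision concrete, below is a hedged, generic sketch of one common form of consistency loss for 2D-to-3D pose lifting (not the paper's formulation): a predicted 3D pose, after a random rotation and re-projection to 2D, should lift back to approximately the same rotated pose. The lifter, projection, and rotation here are toy stand-ins.

```python
import numpy as np

def project(pose_3d):
    # toy orthographic projection: drop the depth coordinate
    return pose_3d[:, :2]

def consistency_loss(lifter, pose_2d, rotation):
    pose_3d = lifter(pose_2d)                   # 2D -> 3D lifting network
    rotated = pose_3d @ rotation.T              # random rigid transform of the prediction
    relifted = lifter(project(rotated))         # lift the re-projected 2D pose again
    return np.mean((relifted - rotated) ** 2)   # penalize inconsistency between the two

# Toy usage: an "identity" lifter that pads a zero depth channel, plus a
# 30-degree rotation about the vertical axis; a real lifter would be a trained network.
toy_lifter = lambda p: np.concatenate([p, np.zeros((p.shape[0], 1))], axis=1)
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
pose_2d = np.random.default_rng(0).standard_normal((17, 2))   # 17 toy 2D joints
print(consistency_loss(toy_lifter, pose_2d, Rz))
```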

3D Human Pose Estimation • Hallucination

RestainNet: a self-supervised digital re-stainer for stain normalization

no code implementations • 28 Feb 2022 • Bingchao Zhao, Jiatai Lin, Changhong Liang, Zongjian Yi, Xin Chen, Bingbing Li, Weihao Qiu, Danyi Li, Li Liang, Chu Han, Zaiyi Liu

In this paper, we formulated stain normalization as a digital re-staining process and proposed a self-supervised learning model called RestainNet.

Self-Supervised Learning

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm

no code implementations • ACL 2022 • Shaoyi Huang, Dongkuan Xu, Ian E. H. Yen, Yijue Wang, Sung-En Chang, Bingbing Li, Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding

Conventional wisdom in pruning Transformer-based language models holds that pruning reduces model expressiveness and thus makes the pruned model more likely to underfit than to overfit.

Knowledge Distillation

Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices

no code implementations • 12 Feb 2021 • Yuhong Song, Weiwen Jiang, Bingbing Li, Panjie Qi, Qingfeng Zhuge, Edwin Hsing-Mean Sha, Sakyasingha Dasgupta, Yiyu Shi, Caiwen Ding

Specifically, RT3 integrates two levels of optimization: first, it applies an efficient BP as the first-step compression for resource-constrained mobile devices; then, it heuristically generates a shrunken search space based on the first-level optimization and uses reinforcement learning to search multiple pattern sets with diverse sparsity for PP, supporting lightweight software reconfiguration that corresponds to the available DVFS frequency levels (i.e., hardware reconfiguration).
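
Assuming PP refers to pattern pruning over weight blocks (an interpretation, not stated in this snippet), the generic sketch below illustrates why switching between pre-searched pattern sets of different sparsity is a lightweight, software-only reconfiguration: only the pruning mask changes, so a denser or sparser model can be selected to match the current DVFS frequency level. This is not the authors' code.

```python
import numpy as np

# Generic pattern-pruning sketch: keep only the `keep` largest-magnitude
# weights of a block and zero the rest. Swapping masks of different sparsity
# changes the effective model without touching the stored dense weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # one toy weight block (32 weights)

def pattern_prune(block, keep):
    """Return the block with only its `keep` largest-magnitude weights, plus the mask."""
    thresh = np.sort(np.abs(block).ravel())[-keep]
    mask = np.abs(block) >= thresh
    return block * mask, mask

# Two hypothetical pattern sets with different sparsity; at run time the
# denser one could be paired with a higher DVFS frequency level.
dense_W, dense_mask = pattern_prune(W, keep=16)   # 50% sparsity
light_W, light_mask = pattern_prune(W, keep=8)    # 75% sparsity
print(dense_mask.sum(), light_mask.sum())         # 16 8
```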

AutoML

FTRANS: Energy-Efficient Acceleration of Transformers using FPGA

no code implementations • 16 Jul 2020 • Bingbing Li, Santosh Pandey, Haowen Fang, Yanjun Lyv, Ji Li, Jieyang Chen, Mimi Xie, Lipeng Wan, Hang Liu, Caiwen Ding

In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms without using sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements for sequence-to-sequence tasks.
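
For context, here is a minimal NumPy sketch of the standard scaled dot-product self-attention the sentence refers to: every token attends to every other token in one matrix product, with no sequence-aligned recurrence or convolution. The tiny dimensions and random weights are purely illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # all pairwise token interactions at once
    return softmax(scores) @ V                # weighted sum of value vectors

rng = np.random.default_rng(0)
d = 8
X = rng.standard_normal((5, d))               # a toy sequence of 5 token embeddings
out = self_attention(X, *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)                              # (5, 8)
```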

Model Compression
