no code implementations • 21 May 2024 • Jiahao Zhang, Yin Gu, Deyu Sun, Yuhua Gao, Ming Gao, Ming Cui, Teng Zhang, He Ma
Fusing the complementary information of computed tomography (CT) and magnetic resonance imaging (MRI) may be useful for precisely outlining the extent of paracervical tissue invasion.
no code implementations • 14 Mar 2024 • Jianwei Sun, Chaoyang Mei, Linlin Wei, Kaiyu Zheng, Na Liu, Ming Cui, Tianyi Li
The efficacy of large language models (LLMs) is heavily dependent on the quality of the underlying data, particularly within specialized domains.
no code implementations • 25 Jan 2024 • Shun Fang, Xing Feng, Ming Cui
The hierarchical transmittance fields are fed into a 3D-CNN to extract more informative transmittance features.
no code implementations • 23 Jan 2024 • Shun Fang, Ming Cui, Xing Feng, Yanan Zhang
NeRF's high-quality scene-synthesis capability was quickly embraced by researchers in the years after it was proposed, and significant progress has since been made in 3D scene representation and synthesis.
no code implementations • 23 Jan 2024 • Shun Fang, Ming Cui, Xing Feng, Yanna Lv
Neural Radiance Field (NeRF) technology can learn a 3D implicit model of a scene from 2D images and synthesize realistic novel-view images.
no code implementations • 5 Jan 2024 • Na Liu, Liangyu Chen, Xiaoyu Tian, Wei Zou, Kaijiang Chen, Ming Cui
This paper introduces RAISE (Reasoning and Acting through Scratchpad and Examples), an advanced architecture enhancing the integration of Large Language Models (LLMs) like GPT-4 into conversational agents.
no code implementations • 27 Oct 2023 • Xiaoyu Tian, Liangyu Chen, Na Liu, Yaxuan Liu, Wei Zou, Kaijiang Chen, Ming Cui
The fast thinking model serves as the primary interface for external interactions and initial response generation, and evaluates whether the slow thinking model needs to be engaged based on the complexity of the complete response.
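The fast/slow routing described above can be sketched as a simple dispatch loop. This is a minimal illustration only: the complexity heuristic, the threshold, and the two model stubs are all hypothetical stand-ins, not the mechanism from the paper.

```python
# Hypothetical sketch of dual-process (fast/slow) response routing.
# The complexity heuristic and thresholds are illustrative assumptions.

def fast_model(query: str) -> str:
    """Stand-in for the fast thinking model: cheap, immediate response."""
    return f"quick answer to: {query}"

def slow_model(query: str) -> str:
    """Stand-in for the slow thinking model: deliberate, expensive reasoning."""
    return f"deliberate answer to: {query}"

def estimate_complexity(query: str) -> float:
    """Toy proxy for response complexity (here: just the word count)."""
    return len(query.split()) / 10.0

def respond(query: str, threshold: float = 1.0) -> str:
    """The fast model handles the interaction and decides whether to escalate."""
    draft = fast_model(query)
    if estimate_complexity(query) >= threshold:
        return slow_model(query)  # escalate queries judged too complex
    return draft
```

In a real system the escalation decision would itself be made by the fast model rather than by a fixed heuristic, so the threshold here is purely for demonstration.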