Search Results for author: Junlin Wang

Found 11 papers, 5 papers with code

Reasoning in Token Economies: Budget-Aware Evaluation of LLM Reasoning Strategies

no code implementations • 10 Jun 2024 • Junlin Wang, Siddhartha Jain, Dejiao Zhang, Baishakhi Ray, Varun Kumar, Ben Athiwaratkun

A diverse array of reasoning strategies has been proposed to elicit the capabilities of large language models.

Ingenuity
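The premise of budget-aware evaluation (comparing reasoning strategies only at matched inference cost) can be illustrated with a minimal sketch. The token accounting and the majority-vote strategy below are illustrative assumptions, not the paper's actual protocol; `sample_answer` is a hypothetical callable standing in for an LLM call.

```python
import random
from collections import Counter

def majority_vote_under_budget(sample_answer, budget_tokens):
    """Draw stochastic answers (each with a token cost) until the
    budget would be exceeded, then return the majority answer and
    the tokens actually spent."""
    spent, answers = 0, []
    while True:
        answer, cost = sample_answer()
        if spent + cost > budget_tokens:
            break
        spent += cost
        answers.append(answer)
    if not answers:
        return None, 0
    return Counter(answers).most_common(1)[0][0], spent

# Toy stand-in for an LLM: answers "42" 70% of the time, 30 tokens each.
random.seed(0)
def toy_sampler():
    return ("42" if random.random() < 0.7 else "41"), 30

best, used = majority_vote_under_budget(toy_sampler, budget_tokens=300)
```

Holding `budget_tokens` fixed is what makes a comparison between, say, one long chain-of-thought and many short sampled answers an apples-to-apples one.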

Mixture-of-Agents Enhances Large Language Model Capabilities

no code implementations • 7 Jun 2024 • Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou

With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction.

Language Modelling, Large Language Model, +1
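The layered aggregate-and-synthesize pattern that the title refers to can be sketched minimally: each layer's agents see the user prompt plus all responses from the previous layer, and a final agent synthesizes the answer. The agent callables and the prompt template below are placeholders, not the paper's actual models or prompts.

```python
def mixture_of_agents(prompt, layers):
    """Run a list of agent layers; `layers` is a list of lists of
    callables (prompt -> str).  Each layer after the first receives
    the original prompt augmented with the previous layer's
    responses; the last layer's (single) agent produces the output."""
    previous = []
    for agents in layers:
        augmented = prompt
        if previous:
            refs = "\n".join(f"- {r}" for r in previous)
            augmented = f"{prompt}\n\nReference responses:\n{refs}"
        previous = [agent(augmented) for agent in agents]
    return previous[0] if previous else ""

# Toy agents: simple string transformers standing in for LLM clients.
upper = lambda p: p.splitlines()[0].upper()
echo = lambda p: p.splitlines()[0]
final = lambda p: "SYNTH: " + str(len(p))

out = mixture_of_agents("hello world", [[upper, echo], [final]])
```

The design choice the sketch captures is that aggregation happens in the prompt itself: later agents are free to keep, merge, or discard earlier responses.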

FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models

1 code implementation • 23 May 2024 • Hongyang Yang, Boyu Zhang, Neng Wang, Cheng Guo, Xiaoli Zhang, Likun Lin, Junlin Wang, Tianyu Zhou, Mao Guan, Runjia Zhang, Christina Dan Wang

As financial institutions and professionals increasingly incorporate Large Language Models (LLMs) into their workflows, substantial barriers, including proprietary data and specialized knowledge, persist between the finance sector and the AI community.

AI Agent, Decision Making, +2

LLM-Resistant Math Word Problem Generation via Adversarial Attacks

1 code implementation • 27 Feb 2024 • Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra

Large language models (LLMs) have significantly transformed the educational landscape.

Math

Maestro: A Gamified Platform for Teaching AI Robustness

no code implementations • 14 Jun 2023 • Margarita Geleta, Jiacen Xu, Manikanta Loya, Junlin Wang, Sameer Singh, Zhou Li, Sergio Gago-Masague

We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI.

Active Learning

NeuroComparatives: Neuro-Symbolic Distillation of Comparative Knowledge

1 code implementation • 8 May 2023 • Phillip Howard, Junlin Wang, Vasudev Lal, Gadi Singer, Yejin Choi, Swabha Swayamdipta

We introduce NeuroComparatives, a novel framework for comparative knowledge distillation overgenerated from language models such as GPT-variants and LLaMA, followed by stringent filtering of the generated knowledge.

Knowledge Distillation, valid, +1
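The overgenerate-then-filter pattern the abstract describes (sample many candidate statements from a language model, then keep only those passing strict checks) can be sketched as follows. The generator and both filters are toy placeholders, not the paper's actual prompts or validity criteria.

```python
import itertools

def overgenerate_then_filter(generate, n_candidates, filters):
    """Draw n_candidates statements from `generate`, then keep only
    those that pass every filter.  Stringency comes from composing
    filters: each one can only shrink the surviving set."""
    candidates = [generate() for _ in range(n_candidates)]
    return [c for c in candidates if all(f(c) for f in filters)]

# Toy example: keep only comparative sentences mentioning both entities.
pool = itertools.cycle([
    "pencils are lighter than laptops",
    "pencils are nice",
    "laptops are heavier than pencils",
])
gen = lambda: next(pool)
comparative = lambda s: " than " in s
mentions_both = lambda s: "pencils" in s and "laptops" in s

kept = overgenerate_then_filter(gen, 6, [comparative, mentions_both])
```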

Primal Dual Alternating Proximal Gradient Algorithms for Nonsmooth Nonconvex Minimax Problems with Coupled Linear Constraints

no code implementations • 9 Dec 2022 • Huiling Zhang, Junlin Wang, Zi Xu, Yu-Hong Dai

The iteration complexity of the two algorithms is proved to be $\mathcal{O}\left( \varepsilon ^{-2} \right)$ (resp.
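For readers unfamiliar with the problem class named in the title, a common generic form of a minimax problem with coupled linear constraints is shown below. This is an assumed illustration for orientation only; the paper's exact formulation, constraint structure, and notation may differ.

```latex
% Generic nonconvex minimax problem with coupled linear constraints
% (illustrative form; not necessarily the paper's exact setting):
\min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y)
\quad \text{s.t.} \quad A x + B y \le c
```

The coupling (both $x$ and $y$ appearing in the same linear constraint) is what distinguishes this class from standard minimax problems with separable feasible sets.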

GAP-Gen: Guided Automatic Python Code Generation

1 code implementation • 19 Jan 2022 • Junchen Zhao, Yurun Song, Junlin Wang, Ian G. Harris

In this work, we propose GAP-Gen, a Guided Automatic Python Code Generation method based on Python syntactic constraints and semantic constraints.

Ranked #2 on Code Generation on CodeXGLUE - CodeSearchNet (using extra training data)

Code Generation
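A syntactic constraint on generated Python, as the abstract describes, can be enforced mechanically with the standard-library `ast` module. The specific check below (the snippet must parse and define at least one function) is an illustrative assumption, not GAP-Gen's actual constraint set.

```python
import ast

def satisfies_syntactic_constraint(source):
    """Return True if a candidate Python snippet parses and defines
    at least one function (an assumed, illustrative constraint)."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return any(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))

ok = satisfies_syntactic_constraint("def add(a, b):\n    return a + b\n")
bad = satisfies_syntactic_constraint("def add(a, b) return a + b")
```

Checks like this can be applied during or after decoding to reject candidates that violate the constraint.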

Gradient-based Analysis of NLP Models is Manipulable

no code implementations • Findings of the Association for Computational Linguistics 2020 • Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh

Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, their faithfulness.

Text Classification
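The gradient-based saliency maps the abstract refers to attribute a model's score to input features via the gradient of the output with respect to the input. A minimal stand-in for a neural model is a logistic-regression scorer, where the gradient has a closed form; this is an illustrative sketch, not the paper's experimental setup.

```python
import numpy as np

def input_gradient_saliency(w, b, x):
    """Gradient-of-score saliency for a logistic-regression scorer:
    the saliency of feature i is |d sigma(w.x + b) / d x_i|."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid score
    grad = p * (1.0 - p) * w                # chain rule: dp/dx = p(1-p) * w
    return np.abs(grad)

w = np.array([2.0, -0.5, 0.0])
sal = input_gradient_saliency(w, 0.1, np.array([1.0, 1.0, 1.0]))
```

The paper's point is that such maps can be manipulated without changing predictions, so agreement with the weights (as in this toy case) should not be taken for granted in real networks.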

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

1 code implementation • IJCNLP 2019 • Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh

Neural NLP models are increasingly accurate but are imperfect and opaque: they break in counterintuitive ways and leave end users puzzled by their behavior.

Language Modelling Masked Language Modeling +1
