Icy: A benchmark for measuring compositional inductive bias of emergent communication models

29 Sep 2021 · Hugh Perkins

We present a benchmark, ICY, for measuring the compositional inductive bias of models in the context of emergent communication. We devise corrupted compositional grammars that probe for limitations in the compositional inductive bias of frequently used models, and use these corrupted grammars to compare and contrast a wide range of models. We propose a hierarchical model, HU-RNN, which might show an inductive bias towards relocatable atomic groups of tokens, thus potentially encouraging the emergence of words. We experiment with probing the compositional inductive bias of sender networks in isolation, and also placed end-to-end with a receiver, as an auto-encoder. We propose a metric of compositionality, Compositional Entropy, that is fast to calculate and broadly applicable.
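The abstract does not spell out what the compositional grammars or their corruptions look like, so the following is only a minimal illustrative sketch, under assumed conventions: a toy grammar maps each attribute–value meaning to a concatenation of per-attribute tokens, and one hypothetical corruption shuffles word order per meaning. The function names and the specific corruption are assumptions for illustration, not the paper's actual benchmark code.

```python
import itertools
import random


def compositional_grammar(num_attributes=3, num_values=4):
    """Toy compositional grammar: each (attribute, value) pair gets its own
    token, and a meaning (a tuple of values) maps to the concatenation of its
    per-attribute tokens."""
    tokens = {(a, v): f"t{a}_{v}"
              for a in range(num_attributes)
              for v in range(num_values)}
    grammar = {}
    for meaning in itertools.product(range(num_values), repeat=num_attributes):
        grammar[meaning] = [tokens[(a, v)] for a, v in enumerate(meaning)]
    return grammar


def corrupt_shuffle_words(grammar, seed=0):
    """Hypothetical corruption: permute word order independently for each
    meaning, so utterances lose positional consistency while the word
    inventory stays the same."""
    rng = random.Random(seed)
    corrupted = {}
    for meaning, utterance in grammar.items():
        utterance = list(utterance)
        rng.shuffle(utterance)
        corrupted[meaning] = utterance
    return corrupted


if __name__ == "__main__":
    g = compositional_grammar()
    c = corrupt_shuffle_words(g)
    print(g[(0, 1, 2)], "->", c[(0, 1, 2)])
```

A benchmark of this kind would train a model to reproduce utterances from the clean grammar and from each corrupted variant, taking the gap in learning speed or accuracy as a probe of the model's compositional inductive bias.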
