Investigating semantic subspaces of Transformer sentence embeddings through linear structural probing (2310.11923v1)
Abstract: The question of what kinds of linguistic information are encoded in different layers of Transformer-based LLMs is of considerable interest for the NLP community. Existing work, however, has overwhelmingly focused on word-level representations and encoder-only LLMs with the masked-token training objective. In this paper, we present experiments with semantic structural probing, a method for studying sentence-level representations by finding a subspace of the embedding space that provides suitable task-specific pairwise distances between data points. We apply our method to LLMs from different families (encoder-only, decoder-only, encoder-decoder) and of different sizes in the context of two tasks, semantic textual similarity and natural language inference. We find that model families differ substantially in their performance and layer dynamics, but that the results are largely model-size invariant.
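To make the probing setup concrete, below is a minimal sketch of a linear structural probe over sentence embeddings: a learned matrix `B` projects embeddings into a low-rank subspace, and the probe is trained so that squared distances in that subspace approximate task-derived target distances (e.g., distances derived from STS similarity scores). The class and function names, the loss, and the training loop are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class LinearStructuralProbe(nn.Module):
    """Learns a linear map B so that squared distances in the projected
    subspace approximate task-specific target distances between
    sentence embeddings (assumed setup, in the spirit of structural probing)."""

    def __init__(self, embedding_dim: int, probe_rank: int):
        super().__init__()
        # B projects model embeddings into a low-rank probing subspace.
        self.B = nn.Parameter(torch.randn(embedding_dim, probe_rank) * 0.01)

    def forward(self, h_i: torch.Tensor, h_j: torch.Tensor) -> torch.Tensor:
        # Squared L2 distance between the two projected sentence embeddings.
        diff = (h_i - h_j) @ self.B
        return (diff ** 2).sum(dim=-1)


def train_probe(probe, pairs, target_dist, epochs=10, lr=1e-3):
    """pairs: (N, 2, d) tensor of sentence-embedding pairs;
    target_dist: (N,) tensor of task-derived distances
    (e.g., 1 - normalized STS score; an assumption for illustration)."""
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        opt.zero_grad()
        pred = probe(pairs[:, 0], pairs[:, 1])
        loss = loss_fn(pred, target_dist)
        loss.backward()
        opt.step()
    return probe
```

In such a setup, the probe's rank and the layer from which the sentence embeddings are taken are the quantities of interest: comparing probe fit across layers and model families is what allows the layer-dynamics comparisons the abstract describes.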