
Predicting Drug-Gene Relations via Analogy Tasks with Word Embeddings (2406.00984v3)

Published 3 Jun 2024 in cs.CL

Abstract: Natural language processing (NLP) is utilized in a wide range of fields, where words in text are typically transformed into feature vectors called embeddings. BioConceptVec is a specific example of embeddings tailored for biology, trained on approximately 30 million PubMed abstracts using models such as skip-gram. Generally, word embeddings are known to solve analogy tasks through simple vector arithmetic. For instance, $\mathrm{\textit{king}} - \mathrm{\textit{man}} + \mathrm{\textit{woman}}$ predicts $\mathrm{\textit{queen}}$. In this study, we demonstrate that BioConceptVec embeddings, along with our own embeddings trained on PubMed abstracts, contain information about drug-gene relations and can predict target genes from a given drug through analogy computations. We also show that categorizing drugs and genes using biological pathways improves performance. Furthermore, we illustrate that vectors derived from known relations in the past can predict unknown future relations in datasets divided by year. Despite the simplicity of implementing analogy tasks as vector additions, our approach demonstrated performance comparable to that of LLMs such as GPT-4 in predicting drug-gene relations.
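The analogy approach described in the abstract reduces to simple vector arithmetic over pre-trained concept embeddings. Below is a minimal sketch, assuming a hypothetical `{concept: vector}` dictionary (e.g., loaded from BioConceptVec); the names `drug_A`, `gene_X`, the helper `predict_by_analogy`, and the toy random vectors are illustrative assumptions, not the paper's actual pipeline. A known drug-gene pair defines a relation vector that is added to a query drug, and candidate genes are ranked by cosine similarity.

```python
# Sketch of drug-gene prediction via an analogy task over word embeddings.
# Assumes embeddings are available as a {concept_name: numpy vector} dict;
# all concept names below are placeholders for real drug/gene identifiers.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_by_analogy(embeddings, query_drug, known_drug, known_gene,
                       candidate_genes, top_k=5):
    """Rank candidate genes by similarity to query_drug + (known_gene - known_drug)."""
    relation = embeddings[known_gene] - embeddings[known_drug]
    target = embeddings[query_drug] + relation
    scores = {g: cosine(target, embeddings[g]) for g in candidate_genes}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy usage with random vectors standing in for trained embeddings.
rng = np.random.default_rng(0)
emb = {name: rng.normal(size=100)
       for name in ["drug_A", "drug_B", "gene_X", "gene_Y", "gene_Z"]}
print(predict_by_analogy(emb, "drug_B", "drug_A", "gene_X",
                         ["gene_X", "gene_Y", "gene_Z"]))
```

In practice, the relation vector would be averaged over many known drug-gene pairs (or pairs within the same biological pathway, as the abstract suggests) rather than taken from a single pair.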

Authors (9)
  1. Hiroaki Yamagiwa (13 papers)
  2. Ryoma Hashimoto (1 paper)
  3. Kiwamu Arakane (1 paper)
  4. Ken Murakami (1 paper)
  5. Shou Soeda (1 paper)
  6. Momose Oyama (8 papers)
  7. Mariko Okada (1 paper)
  8. Hidetoshi Shimodaira (45 papers)
  9. Yihua Zhu (4 papers)

