Superposition Coding-Based Bounds and Capacity for the Cognitive Z-Interference Channels (1101.1920v2)

Published 10 Jan 2011 in cs.IT and math.IT

Abstract: This paper considers the cognitive interference channel (CIC) with two transmitters and two receivers, in which the cognitive transmitter non-causally knows the message and codeword of the primary transmitter. We first introduce a discrete memoryless more capable CIC, which extends the more capable broadcast channel (BC). Using superposition coding, we propose an inner bound and an outer bound on its capacity region. The outer bound is also valid when the primary user is under strong interference. For the Gaussian CIC, this outer bound applies for $|a| \geq 1$, where $a$ is the gain of the interference link from the secondary user to the primary receiver. These inner and outer bounds are then applied to the Gaussian cognitive Z-interference channel (GCZIC), in which only the primary receiver suffers interference. After showing that jointly Gaussian inputs maximize these bounds for the GCZIC, we evaluate them for this channel. The new outer bound is strictly tighter than previous outer bounds on the capacity of the GCZIC under strong interference ($a^2 \geq 1$). In particular, the outer bound coincides with the inner bound for $|a| \geq \sqrt{1 + P_1}$ and thus establishes the capacity of the GCZIC in this range. For such large $a$, superposition encoding at the cognitive transmitter and successive decoding at the primary receiver are capacity-achieving.
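For concreteness, here is a minimal sketch of the GCZIC model consistent with the abstract's notation; the unit-variance noise and the power constraints $P_1$ and $P_2$ are assumed normalizations, not details taken from the paper. Transmitter 1 is the primary user; transmitter 2 is the cognitive user, which knows the primary message and codeword non-causally; only the primary receiver sees interference, through the gain $a$:

\[
Y_1 = X_1 + a X_2 + Z_1, \qquad Y_2 = X_2 + Z_2,
\]

with $\mathbb{E}[X_i^2] \leq P_i$ and $Z_i \sim \mathcal{N}(0,1)$. Under this normalization, the threshold $|a| \geq \sqrt{1 + P_1}$ admits a quick sanity check: treating $X_1$ as noise, the primary receiver sees the cognitive signal at SNR $a^2 P_2 / (1 + P_1)$, which is at least the cognitive receiver's own SNR $P_2$ exactly when $a^2 \geq 1 + P_1$. In that regime the primary receiver can decode the cognitive message first and subtract it, consistent with the successive-decoding scheme the abstract identifies as capacity-achieving.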

Citations (7)

Authors (2)
