Joint Beamforming Design and Power Allocation for Full-Duplex NOMA Cognitive Relay Systems (1708.03915v1)

Published 13 Aug 2017 in cs.IT and math.IT

Abstract: In this paper, we consider a non-orthogonal multiple access cognitive radio network, where a full-duplex (FD) multi-antenna relay assists transmission from a base station (BS) to a cognitive far user, while, at the same time, the BS transmits to a cognitive near user. Our objective is to enlarge the far-near user rate region by maximizing the rate of the near user under the constraint that the rate of the far user stays above a given threshold. To this end, a non-convex joint optimization problem over the relay beamforming and the transmit powers at the BS and cognitive relay is solved via semidefinite relaxation, in conjunction with an efficiently solvable line-search approach. For comparison, we also consider a low-complexity fixed-beamformer design, for which the optimum power allocation between the BS and cognitive relay is derived. Our results demonstrate that the proposed joint optimization can significantly reduce the impact of the residual self-interference at the FD relay and of the inter-user interference at the near user.
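
The abstract describes a two-layer solution structure: a semidefinite relaxation (SDR) of the relay beamforming subproblem combined with a one-dimensional line search over the power allocation. As a rough, hedged illustration only, the following Python/CVXPY sketch mimics that structure under a heavily simplified hypothetical system model: random placeholder channels, a single BS power-split variable `alpha`, and simplified per-hop SINR expressions stand in for the paper's actual formulation, and none of the parameter names or values are taken from the paper.

```python
# Hypothetical sketch (not the paper's formulation): inner SDR over the relay
# beamforming matrix for a fixed BS power split, outer 1-D search over the split.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# ---- Placeholder parameters (assumptions for this sketch) ----
N = 4              # relay transmit antennas
P_B = 10.0         # BS power budget
P_R = 10.0         # relay power budget
sigma2 = 1.0       # noise power
rho_si = 0.01      # residual self-interference gain at the FD relay
gamma_far = 2.0    # far-user SINR threshold (rate threshold mapped to an SINR)

# Random stand-in channels (the paper derives these from its system model)
h_bn = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)              # BS -> near user
h_br = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)              # BS -> relay
h_rf = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # relay -> far user
h_rn = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # relay -> near user
H_f = np.outer(h_rf, h_rf.conj())
H_n = np.outer(h_rn, h_rn.conj())

def relay_sdr(alpha):
    """SDR subproblem for a fixed BS power split alpha (fraction for the far user)."""
    W = cp.Variable((N, N), hermitian=True)      # relaxed beamforming matrix W = w w^H
    constraints = [
        W >> 0,
        cp.real(cp.trace(W)) <= P_R,
        # relay -> far-user hop must meet the SINR threshold
        cp.real(cp.trace(H_f @ W)) >= gamma_far * sigma2,
        # BS -> relay hop, degraded by residual self-interference that grows with relay power
        alpha * P_B * abs(h_br) ** 2
            >= gamma_far * (rho_si * cp.real(cp.trace(W)) + sigma2),
    ]
    # For fixed alpha, maximizing the near-user SINR is equivalent to minimizing the
    # inter-user interference the relay's transmission causes at the near user.
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(H_n @ W))), constraints)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None, -np.inf
    sinr_near = (1 - alpha) * P_B * abs(h_bn) ** 2 / (max(prob.value, 0.0) + sigma2)
    return W.value, np.log2(1 + sinr_near)

# Outer one-dimensional search over the BS power split (the "line search" layer)
results = [(a, *relay_sdr(a)) for a in np.linspace(0.05, 0.95, 19)]
alpha_opt, W_opt, rate_near = max(results, key=lambda t: t[2])
print(f"best split alpha = {alpha_opt:.2f}, near-user rate = {rate_near:.3f} bit/s/Hz")
# A rank-1 beamformer w would then be recovered from W_opt (e.g., its principal eigenvector).
```

The design intent of the sketch: for a fixed power split, the far-user constraints are linear in the relaxed matrix W, so the inner problem is a convex SDP; the outer sweep over the split plays the role of the paper's line search.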

Citations (16)
