Control plane
Draft
Detail editor for a single draft: save it, approve it into the queue, or reject it out of the editorial flow.
local DB · private
Source
2026-03-21 10:57:24.000000
Very impressive: MSA (Memory Sparse Attention) is exciting because it lets AI models directly store and reason over massive long-term memory inside their attention mechanism, without relying on external retrieval or lossy compression, making them far more accurate and scalable.
It allows a 100M-token context window with minimal performance loss.
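The idea of attending sparsely over a huge memory can be sketched as top-k sparse attention: score the query against every memory slot, but run softmax attention only over the k best-matching slots. This is an illustrative assumption, not the paper's actual MSA algorithm; the function name, shapes, and parameters below are invented for the sketch.

```python
import numpy as np

def sparse_memory_attention(query, mem_keys, mem_values, k=8):
    # Score the query against every memory slot, then attend only
    # over the top-k slots: attention cost scales with k, not with
    # the total memory length.
    scores = mem_keys @ query / np.sqrt(query.shape[-1])
    top = np.argpartition(scores, -k)[-k:]      # indices of the k best slots
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                    # softmax over selected slots only
    return weights @ mem_values[top]

# Toy memory of 10,000 slots; a real system would use a far longer memory.
rng = np.random.default_rng(0)
memory_len, dim = 10_000, 64
keys = rng.normal(size=(memory_len, dim))
values = rng.normal(size=(memory_len, dim))
q = rng.normal(size=dim)
out = sparse_memory_attention(q, keys, values, k=8)
print(out.shape)  # (64,)
```

Because only k slots participate in the softmax, the memory can grow arbitrarily long while per-query attention work stays constant, which is the property the claim above appeals to.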
reference: https://x.com/elliotchen100/status/2034479369855590660
Quoted original
艾略特 (@elliotchen100) · Thu Mar 19 03:57:35 +0000 2026
The paper is out. It's called MSA, Memory Sparse Attention.
What it is, in one sentence:
It gives large models native ultra-long memory. Not bolt-on retrieval, not brute-force context-window expansion, but "memory" grown directly into the attention mechanism, trained end to end.
Why didn't previous approaches work?
RAG https://t.co/tOXz0pzc4J
Draft text
Req 2026-03-21T1101-TOP1
Queue membership is preserved when editing an already approved draft.