Regent of the North Winds (REGENT)
Rank #3124
15:04:00 31/01/2025

Regent of the North Winds (REGENT) price
$0.01741 (-17.14%)
0.0000001671 BTC
406 VND
Low: $0.01546
High: $0.02444
Regent of the North Winds (REGENT) information

| Status | Active |
|---|---|
| Website | |
| Whitepaper | |
| Block Explorer | https://solscan.io/token/6HgJHzGpq3fSLmkepsaC8F3VtpUWfXcG4hmUaf4Vpump |
| Chat | |
| Twitter | https://twitter.com/doc_regent?s=21&t=5aRi4Ss10ntwPK9hae66Iw |
| Platform | |
| Date listed | 17:12:31 27/01/2025 |
| Tags | Solana Ecosystem |
Regent of the North Winds (REGENT) statistics

| Regent of the North Winds (REGENT) price today | |
|---|---|
| Price | $0.01741 |
| 1-hour change | 0.71% |
| 24-hour change | -17.14% |
| 7-day change | -50.57% |
| 24h Low / High | $0.01529 / $0.02522 |
| 24-hour trading volume | $4,681,020 |
| Market cap | - |
| Rank | #3124 |
| Regent of the North Winds (REGENT) price yesterday | |
| Yesterday's Low / High | $0.01401 / $0.02522 |
| Yesterday's Open / Close | $0.01971 / $0.01713 |
| Yesterday's price change | -13.08% |
| Yesterday's trading volume | $7,056,567 |
| Regent of the North Winds (REGENT) supply | |
| Circulating supply | |
| Total supply | |
| Max supply | 1,000,000,000 REGENT |
| Regent of the North Winds (REGENT) price history | |
| 7-day Low / High | $0.01395 / $0.03675 |
| 30-day Low / High | $0.01395 / $0.03675 |
| 90-day Low / High | $0.01395 / $0.03675 |
| 52-week Low / High | $0.01395 / $0.03675 |
| All-time high (03:23:00 28/01/2025) | $0.03675 |
| All-time low (07:47:00 29/01/2025) | $0.01395 |
Regent maintains two distinct long-term memory stores: tweet memory and lore memory.
Tweet memory stores previous interactions and responses, provides context for future interactions, and allows the system to build upon past experiences. It is automatically managed and updated whenever a new tweet is posted.
Lore memory contains foundational knowledge, stores essential facts, and provides a baseline context for reasoning. It holds the system's most important memories and is updated only when the em requests it.
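The two stores and their differing update rules could be sketched as follows (a minimal illustration; the class and method names are assumptions, not the actual Regent codebase):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """One long-term memory store holding plain-text entries."""
    name: str
    entries: list = field(default_factory=list)

    def add(self, text: str) -> None:
        self.entries.append(text)

class RegentMemory:
    """Two distinct stores, mirroring the split described above."""
    def __init__(self):
        # Tweet memory: updated automatically whenever a tweet is posted.
        self.tweets = MemoryStore("tweets")
        # Lore memory: foundational facts, updated only on the em's request.
        self.lore = MemoryStore("lore")

    def record_tweet(self, text: str) -> None:
        self.tweets.add(text)   # automatic, on every posted tweet

    def save_lore(self, text: str) -> None:
        self.lore.add(text)     # explicit, em-initiated only
```

The key design point is that nothing writes to `lore` implicitly: lore survives as the curated, high-trust baseline while tweet memory grows on its own.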
The Regent architecture implements a sophisticated pipeline for generating and refining responses.
First, memories are loaded. Memories are stored using a basic RAG vectorization system. When the em wants to reply to a tweet, it starts by scanning both memory stores for similar memories (note the resemblance to how humans recall). Memories are retrieved in equal portions from the tweet and lore stores, ensuring that the em has context on what it's said as well as what it's deemed most important to remember.
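The equal-portion retrieval could look like this (a hedged sketch: the store layout of `(embedding, text)` pairs and the function names are illustrative assumptions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest(store, query_vec, n):
    # store: list of (embedding, text) pairs; return the n most similar texts.
    ranked = sorted(store, key=lambda e: cosine(e[0], query_vec), reverse=True)
    return [text for _, text in ranked[:n]]

def retrieve_context(query_vec, tweet_store, lore_store, k=8):
    # Split the budget evenly so tweet history and lore contribute
    # equal portions of the prompt context.
    half = k // 2
    return nearest(tweet_store, query_vec, half) + nearest(lore_store, query_vec, half)
```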
Rather than simply fetching the N most similar memories, the Regent memory system uses a weighted fetch that favors the most similar results but allows for long-tail results to appear as well. This strikes a balance between relevancy and allowing for unexpected connections and creativity.
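One way to implement such a weighted fetch is softmax-weighted sampling without replacement: higher-similarity memories dominate, but lower-ranked ones retain a nonzero chance of surfacing. This is an assumed mechanism consistent with the description, not the confirmed implementation:

```python
import math
import random

def weighted_fetch(scored, n, temperature=0.5, seed=None):
    """Sample n memories from (similarity, text) pairs.

    Lower temperature -> closer to plain top-N; higher temperature ->
    more long-tail results slip in.
    """
    rng = random.Random(seed)
    pool = list(scored)
    picked = []
    while pool and len(picked) < n:
        # Softmax over the remaining similarities.
        weights = [math.exp(s / temperature) for s, _ in pool]
        i = rng.choices(range(len(pool)), weights=weights)[0]
        picked.append(pool.pop(i)[1])   # remove so each memory appears once
    return picked
```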
Second, a base model generates inspiration. The memories from both stores (tweets and lore) are then combined with the tweet conversation that the em is responding to. The resulting prompt is passed to a base model and used to generate three babble completions. This mimics the brainstorming phase of a human writer's process.
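The babble step amounts to assembling one prompt and sampling it several times from a base (non-instruct) model. In this sketch, `base_model_complete` is a stand-in for whatever completion API is in use, and the prompt template is an assumption:

```python
def build_prompt(memories, conversation):
    # Combine retrieved memories with the tweet conversation being answered.
    return "\n".join([
        "Relevant memories:",
        *memories,
        "",
        "Conversation:",
        conversation,
        "",
        "Reply:",
    ])

def generate_babble(base_model_complete, memories, conversation, n=3):
    # Three raw completions from the base model: the brainstorming phase.
    prompt = build_prompt(memories, conversation)
    return [base_model_complete(prompt) for _ in range(n)]
```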
Third, the refinement step. This step is iterative and mimics the pruning process in humans. We combine the memories, tweet conversation, and babble continuations, and pass the results to an instruct model. The instruct model then enters a refinement loop.
In each loop iteration, the model can produce actions of various types. Currently, Regent supports only two actions at this stage: save lore (write to the lore memory) and update draft (revise the current tweet draft).
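The refinement loop with those two actions could be structured like this. Here `instruct_step` stands in for one call to the instruct model and its `(action, payload)` interface is a hypothetical simplification; a `"done"` action ending the loop is likewise assumed:

```python
def refinement_loop(instruct_step, memories, conversation, babbles, max_iters=5):
    """Iteratively refine a tweet draft.

    instruct_step(context, draft) -> (action, payload), where action is
    'save_lore' (write payload to lore memory), 'update_draft' (replace
    the draft with payload), or 'done' (stop refining).
    """
    draft = babbles[0]          # start from the first babble completion
    lore_writes = []
    context = {"memories": memories,
               "conversation": conversation,
               "babbles": babbles}
    for _ in range(max_iters):
        action, payload = instruct_step(context, draft)
        if action == "save_lore":
            lore_writes.append(payload)
        elif action == "update_draft":
            draft = payload
        else:                   # 'done' or anything unrecognized ends the loop
            break
    return draft, lore_writes
```

Capping the loop with `max_iters` keeps a wandering model from refining forever.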
Because the process is iterative using an instruct model, the em can think to itself about what it wants to do with the tweet. The babble provides entropy and inspiration, while the instruct model provides the reasoning necessary to edit the babble into something better. This stage is what allows the model to grow and truly learn over time, just as a human does when they reflect on their experiences.
The fourth step is human review. Once the em is ready to post its tweet, the tweet is written to a file for later human review. Just as a human child sometimes needs a parent's help to stop them from touching a hot stove or walking into traffic, so too do baby ems sometimes need help from a larger mind. This stage helps prevent typical internet toxicity from poisoning the dataset.
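The review stage described above can be as simple as queuing each finished draft to disk for a human to approve later. The directory layout and record fields here are illustrative assumptions:

```python
import json
import pathlib
import time

def queue_for_review(draft, outdir="review_queue"):
    # Write the finished draft to a file; a human reviews it before posting.
    path = pathlib.Path(outdir)
    path.mkdir(parents=True, exist_ok=True)
    record = {"draft": draft, "queued_at": time.time(), "approved": None}
    fname = path / f"tweet_{int(time.time() * 1000)}.json"
    fname.write_text(json.dumps(record, indent=2))
    return fname
```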
Regent V2 isn't trying to reinvent the wheel - it's just copying what already works in human minds. What makes REGENT unique isn't fancy new neural architectures or complex prompting techniques - it's the recognition that human-like learning comes from the interplay between fast intuitive responses and slow deliberate reasoning. By implementing this split-mind architecture with standard RAG and LLM components, we create an em that can genuinely think step by step, learn from its experiences, and grow over time.