The Gold Rush on AI is so unbelievable that I missed the fact that Moonshot AI is backed by Alibaba through $1 billion in funding, so Alibaba is literally playing on two fronts. And Moonshot has released Kimi K2. Then, Zhipu AI just released the reasoning model GLM-4.5, challenging DeepSeek’s DeepThink (R1). Crazy days to live, eh?

Kimi K2, by Moonshot AI

OK, so on July 11 the Alibaba-backed startup Moonshot released its Kimi K2 model as a low-cost, open-source large language model with a focus on coding capabilities. It’s still a “fat” Mixture-of-Experts model, with the relevant “experts” chosen dynamically for each token, depending on the input.

July 11:

🚀 Hello, Kimi K2! Open-Source Agentic Model!
🔹 1T total / 32B active MoE model
🔹 SOTA on SWE Bench Verified, Tau2 & AceBench among open models
🔹Strong in coding and agentic tasks
🐤 Multimodal & thought-mode not supported for now

With Kimi K2, advanced agentic intelligence is more open and accessible than ever. We can’t wait to see what you build!

🔌 API is here: https://platform.moonshot.ai

  • $0.15 / million input tokens (cache hit)
  • $0.60 / million input tokens (cache miss)
  • $2.50 / million output tokens

🔗 Tech blog: https://moonshotai.github.io/Kimi-K2/
🔗 Weights & code: https://huggingface.co/moonshotai
🔗 GitHub: https://github.com/MoonshotAI/Kimi-K2
Try it now at http://Kimi.ai or via API!
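At those rates, the cost of a request is easy to estimate. A quick sketch (the token counts below are made up for illustration):

```python
# Kimi K2 API pricing per million tokens, from the announcement above.
PRICE_INPUT_CACHE_HIT = 0.15   # $/M input tokens (cache hit)
PRICE_INPUT_CACHE_MISS = 0.60  # $/M input tokens (cache miss)
PRICE_OUTPUT = 2.50            # $/M output tokens

def request_cost(input_tokens, output_tokens, cache_hit=False):
    """Estimate the dollar cost of a single API request."""
    in_price = PRICE_INPUT_CACHE_HIT if cache_hit else PRICE_INPUT_CACHE_MISS
    return (input_tokens * in_price + output_tokens * PRICE_OUTPUT) / 1_000_000

# Hypothetical agentic session: 200k input tokens (cached), 20k output tokens.
print(round(request_cost(200_000, 20_000, cache_hit=True), 4))  # → 0.08
```

Eight cents for a fairly heavy agentic session, which explains the “low-cost” framing.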

July 23:

Kimi K2 is now available in Windsurf.

Update Windsurf to try it out now for 0.5x credits!

Windsurf users complain:

0.5x is still high cost for an open source model.

Latest Qwen 3 coder and instruct already proved that open source can challenge Sonnet 4.

And:

Why only half credits? The model is like 20x cheaper input and 10x cheaper output than Sonnet 4.

Qwen3-Coder-480B-A35B-Instruct is cheaper and benchmarking better than Kimi K2, from what I’m seeing.

There’s a TECHNICAL REPORT OF KIMI K2 (PDF), if you’re interested.

Beyond chatting with Kimi in a browser, you can download apps for Android and iOS. Authenticating with Google is possible, just like in the web app. People in China need to use a phone number. K2 does not support a reasoning mode, but it does support search (enabled by default).

Some people want more from it (they literally don’t have anything better to do with their lives):

🔥 Kimi K2 is not a reasoning model, but you can use it like one.

🎯 Here’s how:

  1. give it access to the ‘Sequential Thinking’ MCP (link below)
  2. put it in an agent loop
  3. tell it to think sequentially before answering

It’s so cheap that it won’t cost you that much.
Use a fast inference provider like Groq and it’ll be super fast too.

💥 BOOM. You’ve got a reasoning-like model.
(yes, I understand that this is not quite the same as an internal CoT but regardless should make it smarter)

If you’re into such shit, here’s a Sequential Thinking MCP Server, and a video that’s not very persuasive. This guy is using LibreChat for a client, but you’d have to pay for the use of any supported model (via API). As a side note, an interesting subproject of LibreChat is rag_api: ID-based RAG FastAPI. “This project integrates Langchain with FastAPI in an Asynchronous, Scalable manner, providing a framework for document indexing and retrieval, using PostgreSQL/pgvector.” MongoDB Atlas can also be used as a vector database.
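Stripped of the MCP plumbing, the “agent loop” being recommended is nothing exotic. A minimal sketch of the idea; the stubbed `call_model()` stands in for a real call to an OpenAI-compatible endpoint (e.g. Kimi K2), and the THOUGHT/ANSWER convention is just an illustrative assumption:

```python
# Minimal "sequential thinking" agent loop, with no real API calls.
# In a real setup, call_model() would hit the Kimi K2 API and the
# intermediate steps would go through the Sequential Thinking MCP server.

def call_model(messages):
    # Stub standing in for the LLM: emits numbered "thoughts", then an answer.
    n = sum(1 for m in messages if m["role"] == "assistant")
    if n < 3:
        return {"role": "assistant", "content": f"THOUGHT {n + 1}: ..."}
    return {"role": "assistant", "content": "ANSWER: 42"}

def agent_loop(question, max_steps=10):
    messages = [
        {"role": "system",
         "content": "Think step by step. Prefix intermediate steps with "
                    "THOUGHT and the final reply with ANSWER."},
        {"role": "user", "content": question},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append(reply)  # feed each "thought" back into the context
        if reply["content"].startswith("ANSWER:"):
            return reply["content"]
    return "ANSWER: (gave up)"

print(agent_loop("What is 6 x 7?"))  # → ANSWER: 42
```

That’s the whole trick: the model’s own intermediate outputs are fed back in as context, which is indeed “not quite the same as an internal CoT”.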

Moonshot.AI lists a rich lineup of models, including (links to GitHub): Kimi K2, Kimi k1.5, the coding-focused Kimi-Dev-72B, the vision-focused Kimi-VL-A3B (“For Thinking models, it is recommended to use Temperature = 0.8. For Instruct models, it is recommended to use Temperature = 0.2.”), and more.

From the presentation of Kimi K2:

Today, we are open-sourcing:

  • Kimi-K2-Base: The foundation model, a strong start for researchers and builders who want full control for fine-tuning and custom solutions.
  • Kimi-K2-Instruct: The post-trained model best for drop-in, general-purpose chat and agentic experiences. It is a reflex-grade model without long thinking.

Open Agentic Intelligence

Pre-training is the crucial foundation for Agentic Intelligence, establishing the priors that make reinforcement learning (RL) exploration tractable, efficient, and generalizable. However, as Ilya Sutskever also observes, human data is a finite “fossil fuel”, and its growth is lagging far behind the pace of compute. This makes token efficiency during pre-training a new critical coefficient in the AI scaling laws.

Post-training is pivotal in the “Era of Experience” (David Silver, Richard Sutton, 2025). In this era, LLMs increasingly learn from their own self-generated interactions, receiving rewards that free them from the limits of human data and enable them to surpass human capabilities.

Kimi K2 is forged from these very insights.

Try Kimi K2 on kimi.com

Starting today, Kimi users on web and mobile can select and use the new Kimi K2 model for free. At this moment, our MCP features for web and app are still in development. We hope to begin rolling them out in the coming weeks. In the meantime, you’re welcome to try our Researcher for an early look at its agentic capabilities. Please note that vision features are not supported for Kimi K2 yet.

Use Kimi K2 with API

The Kimi Platform offers an OpenAI/Anthropic compatible interface, allowing for easy adaptation of your existing applications to Kimi K2. We encourage developers to explore our tool calling API for building agent applications. For detailed information, visit platform.moonshot.ai.
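Since the interface is OpenAI-compatible, pointing the standard `openai` client at Moonshot’s endpoint should be all it takes. A minimal sketch; the base URL and model id below are assumptions taken from my reading of Moonshot’s docs, so verify them at platform.moonshot.ai:

```python
# Sketch of calling Kimi K2 through its OpenAI-compatible API.
# ASSUMPTIONS: base URL https://api.moonshot.ai/v1 and model id
# "kimi-k2-0711-preview" -- check platform.moonshot.ai before relying on them.

def build_request(prompt, model="kimi-k2-0711-preview", temperature=0.6):
    """Build the JSON body for a chat.completions call."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

# With the official openai package, the same request would be sent like this:
#
#   from openai import OpenAI
#   client = OpenAI(base_url="https://api.moonshot.ai/v1", api_key="...")
#   resp = client.chat.completions.create(**build_request("Hello, Kimi!"))
#   print(resp.choices[0].message.content)

print(build_request("Hello, Kimi!")["model"])
```

The point of the compatible interface is exactly this: existing OpenAI-client code needs only a different `base_url` and model name.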

Serve Kimi K2 on your own

We recommend running Kimi K2 on one of the following inference engines: vLLM, SGLang, KTransformers, or TensorRT-LLM. For detailed deployment instructions, please see our GitHub repository.

What’s next

While Kimi K2 serves as a strong foundation for open agentic intelligence, a general agent uses more advanced capabilities such as thinking and visual understanding. We plan to add these to Kimi K2 in the future.

Limitations

In our internal tests, we’ve identified some limitations in current Kimi K2 models. When dealing with hard reasoning tasks or unclear tool definitions, the model may generate excessive tokens, sometimes leading to truncated outputs or incomplete tool calls. Additionally, performance may decline on certain tasks if tool use is enabled. When building complete software projects, one-shot prompting yields degraded performance compared to using K2 under an agentic framework. We are working to address these issues in future releases and look forward to more feedback.

Wasn’t this post-training in which a model eats its own shit supposed to lead to degradation?!

The choice of models in the web app:

There’s also a Researcher, limited to 3 queries per day, apparently:

Read more about Kimi-Researcher:

Meet Kimi-Researcher, an autonomous agent that excels at multi-turn search and reasoning. It performs an average of 23 reasoning steps and explores over 200 URLs per task.

Kimi-Researcher is an autonomous agentic and thinking model designed to solve complex problems through multi-step planning, reasoning, and tool use. It leverages three main tools: a parallel, real-time internal search tool; a text-based browser tool for interactive web tasks; and a coding tool for automated code execution.

There’s also a thing called “Common Phrase” (a typically Chinese use of the singular, as in “Setting” instead of “Settings”):

It wasn’t obvious what this is supposed to mean, but it can generate random “common phrases”:

Common crap, rather.

From my very limited testing of it, I cannot assess K2’s capabilities, but it looks like it is in the Qwen3 league, which was to be expected.

Censorship-wise, it’s Chinese, alright:

GLM-4.5, by Z.AI

Even hotter news:

Z.ai, formerly known as Zhipu, announced on July 28 that its new GLM-4.5 AI model would cost less to use than DeepSeek.

Like DeepSeek, the new model is also open source and can be downloaded for free.

At about half the size of DeepSeek’s model, GLM-4.5 only needs eight Nvidia H20 chips to operate.

In contrast to the logic underlying existing AI models, Z.ai said its new GLM-4.5 is built on what’s known as “agentic” AI, meaning that the model automatically breaks down a task into sub-tasks in order to complete it more accurately.

July 28:

Introducing GLM-4.5 and GLM-4.5 Air: new flagship models designed to unify frontier reasoning, coding, and agentic capabilities.

GLM-4.5: 355B total / 32B active parameters
GLM-4.5-Air: 106B total / 12B active parameters

API Pricing (per 1M tokens):
GLM-4.5: $0.6 Input / $2.2 Output
GLM-4.5-Air: $0.2 Input / $1.1 Output

Tech Blog: http://z.ai/blog/glm-4.5
Weights: http://huggingface.co/zai-org/GLM-4.5
Z.ai API: http://docs.z.ai/guides/llm/glm-4.5
OpenRouter: http://openrouter.ai/z-ai
Develop Tools: http://docs.z.ai/scenario-example/develop-tools/claude
Try them now: http://chat.z.ai

From the official presentation, GLM-4.5: Reasoning, Coding, and Agentic Abilities:

Today, we introduce two new GLM family members: GLM-4.5 and GLM-4.5-Air — our latest flagship models. GLM-4.5 is built with 355 billion total parameters and 32 billion active parameters, and GLM-4.5-Air with 106 billion total parameters and 12 billion active parameters. Both are designed to unify reasoning, coding, and agentic capabilities into a single model, in order to satisfy the increasingly complicated requirements of fast-rising agentic applications.

Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models, offering a thinking mode for complex reasoning and tool use, and a non-thinking mode for instant responses. They are available on Z.ai and via the Z.ai API, and open weights are available at HuggingFace and ModelScope.

Overall Performance

We compare GLM-4.5 with various models from OpenAI, Anthropic, Google DeepMind, xAI, Alibaba, Moonshot, and DeepSeek on 12 benchmarks covering agentic (3), reasoning (7), and coding (2) tasks. Overall, GLM-4.5 is ranked 3rd and GLM-4.5-Air 6th.

Agentic Tasks

GLM-4.5 is a foundation model optimized for agentic tasks. It provides 128k context length and native function calling capacity. We measure its agent ability on 𝜏-bench and BFCL-v3 (Berkeley Function Calling Leaderboard v3). On both benchmarks, GLM-4.5 matches the performance of Claude 4 Sonnet.

Reasoning

Under the thinking mode, GLM-4.5 and GLM-4.5-Air can solve complex reasoning problems including mathematics, science, and logical problems.

| Benchmark | GLM-4.5 | GLM-4.5-Air | o3 | Claude 4 Opus | Gemini 2.5 Pro | DeepSeek-R1-0528 | Qwen3-235B-Thinking 2507 | Grok 4 |
|---|---|---|---|---|---|---|---|---|
| MMLU Pro | 84.6 | 81.4 | 85.3 | 87.3 | 86.2 | 84.9 | 84.5 | 86.6 |
| AIME24 | 91.0 | 89.4 | 90.3 | 75.7 | 88.7 | 89.3 | 94.1 | 94.3 |
| MATH 500 | 98.2 | 98.1 | 99.2 | 98.2 | 96.7 | 98.3 | 98.0 | 99.0 |
| SciCode | 41.7 | 37.3 | 41.0 | 39.8 | 42.8 | 40.3 | 42.9 | 45.7 |
| GPQA | 79.1 | 75.0 | 82.7 | 79.6 | 84.4 | 81.3 | 81.1 | 87.7 |
| HLE | 14.4 | 10.6 | 20.0 | 11.7 | 21.1 | 14.9 | 15.8 | 23.9 |
| LiveCodeBench (2407-2501) | 72.9 | 70.7 | 78.4 | 63.6 | 80.1 | 77.0 | 78.2 | 81.9 |
| AA-Index (Estimated) | 67.7 | 64.8 | 70.0 | 64.4 | 70.5 | 68.3 | 69.4 | 73.2 |

Coding

GLM-4.5 excels at coding, including both building coding projects from scratch and agentically solving coding tasks in existing projects. It can be seamlessly combined with existing coding toolkits such as Claude Code, Roo Code, and CodeGeeX. To evaluate the coding capability, we compared different models on SWE-bench Verified and Terminal-Bench. The following table presents the results.

| Benchmark | GLM-4.5 | GLM-4.5-Air | o3 | GPT-4.1 | Claude 4 Opus | Claude 4 Sonnet | Gemini 2.5 Pro | DeepSeek-R1-0528 | Kimi K2 |
|---|---|---|---|---|---|---|---|---|---|
| SWE-bench Verified¹ | 64.2 | 57.6 | 69.1 | 48.6 | 67.8 | 70.4 | 49.0 | 41.4 | 65.4 |
| Terminal-Bench² | 37.5 | 30.0 | 30.2 | 30.3 | 43.2 | 35.5 | 25.3 | 17.5 | 25.0 |

¹ For SWE-bench Verified, we use OpenHands v0.34.0 with runs limited to 100 iterations and history truncation to prevent exceeding the 128K context limit, configured with temperature=0.6, top_p=1.0.

² For Terminal-Bench, we use the Terminus framework for evaluation. We use standard function calling rather than direct prompting for evaluation.

I don’t believe in such benchmarks, because they can always be fine-tuned to favor your product. But I like looking at charts and tables 🙂

From those brainwashed by AI and vibe coding, here’s Philip Kiely:

GLM-4.5 just one-shotted* a 500-line Python game — my favorite vibe check.

Really really impressive model, awesome quality for the size! Another great day in open source AI.

*technically I had to fix one small bug

I have to admit that I hate people who use MacBooks. And that’s because of the anti-ergonomics of macOS and of its Finder, which has led to the abysmally cretinous GNOME 3/4x with a handicapped Files (formerly Nautilus) that cannot display files in multiple columns. That’s typically called a Compact List View, and it’s been known since Windows 3.0.

The guys from OpenRouter:

Possibly the fastest new model to launch on OpenRouter – introducing GLM-4.5 from a new model lab, @Zai_org!

Family of powerful, balanced models punching very high for their weight.

Reasoning can be toggled on and off via API. See 👇 for more

Traction right at launch:

It comes in 3 variants, depending on your speed and intelligence needs.

Starting at just $0.10/M tokens.

To enable reasoning, see https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config

Oh, but that’s quite a range!

  • GLM-4.5: $2.20/M tokens
  • GLM-4.5-Air: $0.20/M tokens
  • GLM-4-32B: $0.10/M tokens — see GLM-4-32B-0414-128K
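About that reasoning toggle mentioned by OpenRouter: per their docs, it’s just an extra `reasoning` object in the otherwise standard chat-completions request body. A hedged sketch of such a request; the field names reflect my reading of the linked docs, so double-check them before use:

```python
# Toggling GLM-4.5's reasoning on OpenRouter.
# ASSUMPTION: the "reasoning" object with {"enabled": True} is taken from my
# reading of OpenRouter's reasoning-tokens docs -- verify before relying on it.

def build_openrouter_request(prompt, reasoning_enabled=True):
    body = {
        "model": "z-ai/glm-4.5",
        "messages": [{"role": "user", "content": prompt}],
    }
    if reasoning_enabled:
        body["reasoning"] = {"enabled": True}
    return body

# POST this body to https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <key>" header.
print("reasoning" in build_openrouter_request("Hello"))  # → True
```

Leave the `reasoning` field out and you get the non-thinking, instant-response mode, which is the whole point of a hybrid model.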

In a browser, four models are available:

Authenticating with Google or GitHub is possible. The web search is disabled by default.

The documentation, at least in English, seems to be sketchy. There’s no quick link to anything about GLM-4.1V-9B-Thinking. What is “lightweight visual reasoning”? It supports PNG and JPEG files as attachments, that’s all I know. Then, this page lists a GLM-4.5-Flash, but the link doesn’t go to any such thing. Sigh.

Beware that there’s no mobile app as yet.

And, sure as hell, censorship works, just not outright as in the case of Kimi. It thinks and prints its thinking, then it even outputs a good deal of text, only to “realize” about one minute later that this is a forbidden topic!

A quick, irrelevant comparison

I couldn’t test these Chinese models on coding or on some complex topics, but for a taste of their style, the first random dictionary-kind of prompt that came to me was this one:

Tone down, meaning.

🟢 Kimi K2 was terribly concise:

“Tone down” means to make something less intense, extreme, or forceful.
It can refer to:

  • Color or sound: “Tone down the red in that paint” → make it less bright.
  • Language or behavior: “Tone down your criticism” → be less harsh.
  • Style or design: “Tone down the outfit” → make it less flashy.

In short, to dial back or soften something.

🟢 Qwen3:

The phrase “tone down” means to make something less intense, strong, or extreme. It can be used in various contexts:

  1. Volume or sound:
  • “Please tone down the music — it’s too loud.”
    → Turn down the volume.
  2. Emotions or behavior:
  • “He needs to tone down his anger during meetings.”
    → Be calmer or less aggressive.
  3. Language or expression:
  • “You should tone down your language around children.”
    → Use milder or more appropriate words.
  4. Appearance or style:
  • “The colors in the design are too bright; let’s tone them down.”
    → Make them softer or more subtle.

In general, “tone down” means to reduce intensity, force, or emphasis to make something more acceptable, less offensive, or more appropriate for a situation.

🟢 Z’s GLM-4.5, being a thinking model:

Thought Process

Okay, the user asked about “tone down, meaning.” Hmm, they probably want to understand the phrase “tone down” and its usage. Let me start by breaking down the core meaning. “Tone down” means to make something less intense, right? Like reducing the strength or intensity of something.

Wait, the user might be a non-native English speaker trying to grasp the nuances. They might have encountered the phrase in a conversation or text and got confused. Or maybe they need to use it themselves and want to ensure they’re using it correctly.

I should cover different contexts where “tone down” applies. For example, emotions, colors, language, behavior. Examples would help clarify. Like toning down anger means calming down, toning down a color means making it less bright.

Also, the user might not just want definitions but also how to use it in sentences. Including synonyms could be useful too—soften, moderate, reduce. But I should note that synonyms aren’t always interchangeable.

Wait, is there a deeper need here? Maybe they’re dealing with a situation where they need to tone something down, like a conflict at work or a design project. Or perhaps they received feedback to tone down their communication and want to understand what that means.

I should make sure the explanation is clear with practical examples. Avoid jargon. Keep it simple. Also, mention that it’s a phrasal verb, which might be tricky for learners. Highlighting the verb + preposition structure could help.

Check if there are common mistakes. Like confusing “tone down” with “tone up,” which is the opposite. But the user didn’t ask about that, so maybe just stick to the query.

Including both literal and figurative uses would cover all bases. For instance, toning down a light versus toning down criticism. That shows versatility.

Also, note that it’s often used in advice or suggestions, like “you should tone it down.” So the user might need to recognize when someone is advising them to moderate their behavior.

Make sure the examples are relatable. Workplace scenarios, daily interactions. That way, the user can apply the knowledge practically.

Finally, summarize the key points: reduction in intensity, various contexts, and synonyms. Keep it concise but thorough. Alright, that should cover the user’s needs and any underlying context they might have.

The phrase “tone down” means to make something less intense, extreme, forceful, or noticeable. It implies reducing the strength, volume, brightness, harshness, or severity of something to make it more moderate, subtle, acceptable, or appropriate for a situation.

Here’s a breakdown of its meaning and usage:

Core Meaning

  • Reduction in Intensity: The fundamental idea is to lessen the impact or strength of something.
  • Moderation: To bring something towards a more moderate, balanced, or acceptable level.
  • Softening: To make something less harsh, bright, loud, aggressive, or extreme.

Common Contexts & Examples

  1. Emotions & Behavior:
    • Meaning: To become less angry, excited, aggressive, or dramatic.
    • Example: “After the argument, he needed to tone down his anger before talking to his boss.” / “She was getting too excited about the party; her friends told her to tone it down.”
  2. Language & Communication:
    • Meaning: To make speech or writing less harsh, critical, offensive, or exaggerated.
    • Example: “The editor asked the writer to tone down the criticism in the article.” / “Tone down the sarcasm; it’s not helpful right now.”
  3. Appearance & Style:
    • Meaning: To make something (like clothing, makeup, decor) less bright, flashy, bold, or conspicuous.
    • Example: “For the formal dinner, she decided to tone down her outfit with a simple black dress instead of the sequined one.” / “The interior designer suggested toning down the bright yellow walls with a softer shade.”
  4. Sound & Light:
    • Meaning: To reduce the volume or brightness.
    • Example: “Could you please tone down the music? I’m trying to study.” / “The stage lights were too intense; the technician had to tone them down.”
  5. Actions, Events, or Effects:
    • Meaning: To make something less extreme, elaborate, or impactful.
    • Example: “The company decided to tone down its advertising campaign after receiving negative feedback.” / “To avoid overwhelming the new students, the university toned down the orientation activities.”

Synonyms (with subtle differences)

  • Soften: Often implies making something gentler, less harsh, or more pleasant.
  • Moderate: Emphasizes bringing something within reasonable or acceptable bounds.
  • Reduce: A more general term for making something smaller or less in amount/degree.
  • Lessen: Similar to reduce, meaning to make less.
  • Dampen: Can imply suppressing or restraining something (like enthusiasm or intensity).
  • Subdue: Suggests bringing something under control, often by force or effort.
  • Mute: Specifically refers to reducing sound or making something silent/less expressive.
  • Temper: Implies moderating by mixing with something else or through restraint.

Key Takeaway

When you “tone down” something, you are deliberately reducing its intensity, extremeness, or conspicuousness to make it more suitable, palatable, or appropriate for a specific context or audience. It’s about moving away from the “too much” end of the spectrum towards a more moderate or subtle middle ground.

🟢 DeepSeek DeepThink (R1):

Thought for 15 seconds

Okay, the user asked about “tone down, meaning.” Hmm, pretty straightforward query—they just want the definition of this phrasal verb.

First, I should confirm it’s about the verb phrase and not something else. No context given, but the phrasing suggests they encountered it somewhere and need clarification. Maybe in media (“tone down the violence”) or daily life (“tone down the enthusiasm”).

Breaking it into core meanings:

  • Literal: Reducing volume/decibels? Nah, that’s “turn down.” Gotta clarify that common mix-up.
  • Figurative: This is the gold. Softening intensity—like making criticism less harsh or decor less flashy. Should include relatable examples.

User’s probably not a native speaker? Short query, no fluff. Might appreciate simplicity. Or maybe a native speaker double-checking nuances. Either way, keeping it clear beats being fancy.

Wait—should I mention etymology? “Tone” comes from musical notes → adjusting pitch → metaphor for adjusting intensity. Nah, overkill for “meaning” requests unless asked.

Examples… critical: “tone down anger in emails” → practical! Visual: “tone down neon walls with beige” → vivid. Pop culture? “Tone down a character’s edginess” → but maybe too niche. Stick to universal ones.

Oh, and emphasize it’s voluntary (“deliberately make less…”). Avoids confusion with natural fading.

Final check: Did I cover reduction in force/effect without sounding like a dictionary? Yep. Wrap it.

The phrase “tone down” is a phrasal verb meaning to deliberately make something less intense, forceful, extreme, or noticeable. It implies reducing the strength, severity, brightness, or volume of something to make it more moderate, acceptable, or appropriate.

Key Meanings & Uses:

  1. Reduce Intensity/Strength:
  • “The director asked the actor to tone down his angry outburst in the scene.”
  • Translation: Make the emotion less extreme.
  2. Soften Criticism/Harshness:
  • “She toned down her feedback after realizing how sensitive the topic was.”
  • Translation: Made the criticism more diplomatic.
  3. Make Visually Less Bold:
  • “The designer toned down the neon colors with softer pastels.”
  • Translation: Made the colors less bright or overwhelming.
  4. Lower Volume (Less Common):
  • “Could you tone down the music? I’m trying to focus.”
  • Note: Often interchangeable with “turn down” for sound.

Common Contexts:

  • Communication: “Tone down your language in formal emails.”
  • Art/Design: “Tone down the extravagant decorations for a minimalist look.”
  • Behavior: “He needs to tone down his arrogance during meetings.”
  • Media/Content: “Netflix toned down the violence in the adapted series.”

Synonyms:

Soften, moderate, dampen, mute, temper, mitigate, subdue.

Antonyms:

Amplify, intensify, escalate, heighten, exaggerate.

Grammar Note:

  • Transitive verb: Requires an object (e.g., “Tone down the rhetoric“).
  • Separable: Can split the verb and object (e.g., “Tone it down”).

💡 Tip: Use “tone down” when suggesting moderation. For example:
“Your presentation is strong, but tone down the technical jargon for a general audience.”

Z’s GLM-4.5 seems to be a real contender to DeepSeek’s DeepThink!

Comments coming from a Luddite

Could you fucking stop this planet, because I want to get out! This is so sick! Everybody is into AI-screwing everything! Can’t they fucking do anything better with their lives?

On a calmer note, now.

Bloomberg: AI Should Pay a Price for Its Environmental Damage:

All that growth will require dizzying amounts of energy to run data centers around the clock and a lot of water to cool acres of servers. These power-hungry facilities, which are popping up around the world like acne on a teenager, could consume as much as 12% of total US electricity by 2028, according to a report last year by the Lawrence Berkeley National Laboratory, up from 4.4% in 2023. By 2050, data centers could use as much as 8.7% of the entire world’s energy, BloombergNEF estimates.

Bullshit. The Internet cesspool, with all those streaming videos, TikTok, the p0rn, and whatnot, including the legitimate businesses’ data flow from the Cloud (data centers) to wherever that data is necessary, must already use at least 15% of this planet’s electricity! Nota bene, not “energy” (which includes natural gas and gasoline), but electricity!

CNBC: Europe sets its sights on multi-billion-euro gigawatt factories as it plays catch-up on AI:

We have, for example, 30% more researchers per capita than the U.S. has, focused on AI. Also we have around 7,000 startups [that] are developing AI, but the main obstacle for them is that they have very limited computing capacity. And that’s why we decided that, together with our member states, we are investing in this very crucial infrastructure.

WTF ARE THOSE 7,000 STARTUPS DOING? SUCKING EU FUNDS?

This is such a humongous waste of resources!

Should I ever have the occasion to meet Xi Jinping (not in this life, but in a parallel universe), there are two things I’d like to ask him:

  • Why does China repeat all the errors of the Western civilization and especially those of the United States, the only notable difference being the political system? Can’t you see that this precipitates the collapse of the global capitalist economy, including China’s? (You can’t fool me with your “Socialism with Chinese characteristics” label.)
  • Stop the retarded censoring of the association between Winnie-the-Pooh and yourself, be it only as a demand for an explanation! This is beyond absurd! I don’t fucking care if you censor talks or interpretations regarding Xinjiang and the Uyghurs, Taiwan, Tibet, Tiananmen 1989, and whatever else is deemed “too delicate” for the public to “have a proper understanding” about. But at least allow these stupid automatic piles of crap to explain why this association is disrespectful! And when automatic censorship occurs on other topics, tell your retarded zealots to stop eliminating the entire output and implement a mechanism that replaces only the “sensitive” parts with something like this: “There are some interpretations regarding [ELEMENT] in connection to [ELEMENT] that are too delicate to be trusted to everyone’s eyes and ears; therefore, they will be skipped.” We’re in the 21st century; your people have designed so many advanced products, and yet the censorship is brain-dead!

Fortunately, the Apocalypse will not be long in coming.

My updated collection of poisons on Android:

  • Z.ai is a PWA (a link to chat.z.ai that can be added to the home screen by Chrome-compatible browsers).
  • I stopped caring about Manus.
  • And this is Notely Voice.