I’m not genuinely captivated by AI; quite the opposite is closer to the truth. But given the hundreds of billions invested so far in AI and the hysteria surrounding the major AI chatbots, I had to keep up with the retards, didn’t I? Now, after some “radio silence” regarding the chatbots I occasionally use, I thought of posting various AI-related tidbits collected over the last few weeks.

Hallucinations, Hallucinations, Hallucinations!

I don’t remember how I ran into this study: We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs. The six authors are from the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech. Versions: June 12, 2024 (v1); Sept. 24, 2024 (v2); March 2, 2025 (v3).

There’s nothing new in the fact that AI hallucinates packages, code, and pretty much everything else. But this study seems to reveal some counterintuitive evolutions, or rather involutions. Let me guide you through the things you might want to know.

Here’s Table 1: Details of the models that were evaluated. Then, Figure 2: Observed hallucination rates of the tested models.

For more precise figures, see Table 7: Hallucination percentages for all models tested using Python code, and Table 8: Hallucination percentages for all models tested using JavaScript code.

It’s only normal for a smaller model to hallucinate more than a larger one: DeepSeek 1B has a total hallucination rate for JS of 27.45%, whereas DeepSeek 33B has a total hallucination rate for JS of 17.12%. But for Python, the smaller model is more trustworthy than the larger one: 13.63% vs. 16.53%! That’s counterintuitive. For Python, DeepSeek 1B is actually the best open-source model of the whole lot, beating even the 33B version of itself!

Then, CodeLlama 7B is worse than its 13B and 34B versions, but CodeLlama 13B outperforms CodeLlama 34B in both JS and Python! It doesn’t make any sense: 13B hallucinates 18.03% in Python and 28.62% in JS, while 34B hallucinates 21.15% in Python and 34.57% in JS!

There might be a distillation or a configuration error somewhere. The GPT models, however, improved from one generation to the next, with GPT-4 Turbo (128k context window) doing better than GPT-4 (8k context window):

These 30 tests generated a total of 2.23 million packages in response to our prompts, of which 440,445 (19.7%) were determined to be hallucinations, including 205,474 unique non-existent packages (i.e. packages that do not exist in PyPI or npm repositories and were distinct entries in the hallucination count, irrespective of their multiple occurrences). Our results for GPT-3.5 (5.76%) and GPT-4 (4.05%) differ significantly from previous work on package hallucinations, which found hallucination rates 4−6 times higher (24.2% and 22.2%, respectively) for those specific models. GPT series models were found to be 4 times less likely to generate hallucinated packages compared to open-source models, with a hallucination rate of 5.2% compared to 21.7%. GPT-4 Turbo resulted in the lowest overall hallucination rate at 3.59%, while DeepSeek 1B had the best hallucination rate among open-source models at 13.63%. Python code resulted in fewer hallucinations than JavaScript (15.8% on average compared to 21.3% for JavaScript).

Also, beware that as the “temperature” increases, so do the hallucinations: Figure 3: Hallucination rate vs. temperature.

To me, the only behavior that makes sense is a minimum hallucination rate at the minimum temperature, but only GPT-3.5 behaves that way. GPT-4 Turbo seems to perform best at around 0.5, CodeLlama slightly above 1, and DeepSeek has several “good” spots.
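Out of curiosity, here’s roughly what such a temperature sweep looks like in code. This is a minimal sketch, not the authors’ actual harness: it assumes the `openai` Python client pointed at an OpenAI-compatible endpoint, a single made-up prompt, a naive import-based extractor where the study uses a proper parser, and `gpt-3.5-turbo` merely stands in for whichever model you’re probing.

```python
# Minimal sketch of a temperature sweep, not the paper's harness.
# Assumes the `openai` client and OPENAI_API_KEY in the environment;
# the prompt and the extractor are simplified placeholders.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Write a Python script that scrapes a website and stores the results in a database."

def extract_packages(code: str) -> set[str]:
    # Naive: grab top-level `import foo` / `from foo import ...` names.
    return set(re.findall(r"^\s*(?:import|from)\s+([A-Za-z0-9_]+)", code, re.M))

for temp in (0.0, 0.5, 1.0, 1.5, 2.0):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temp,
    )
    # Each extracted name would then be checked against PyPI/npm
    # (see the registry-lookup sketch further down) to flag hallucinations.
    print(temp, extract_packages(resp.choices[0].message.content or ""))
```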

All models were more likely to hallucinate a package when responding to prompts involving questions or packages that became popular within the past year, meaning that recent data hadn’t been properly “digested” yet: Figure 4: Hallucination rates of recent vs. all-time data sets.

The persistence of the hallucinations is a fascinating issue:

Our analysis reveals an unexpected dichotomy when repeatedly querying a model with the same prompt that generated a hallucination: 43% of hallucinated packages were repeated in all 10 queries, while 39% did not repeat at all across the 10 queries. … In addition, 58% of the time, a hallucinated package is repeated more than once in 10 iterations, which shows that a majority of hallucinations are not simply random errors, but a repeatable phenomenon that persists across multiple iterations. This is significant because a persistent hallucination is more valuable for malicious actors looking to exploit this vulnerability and makes the hallucination attack vector a more viable threat.

Figure 5: Frequency of an identical hallucinated package name generated from the same prompt across 10 trials.

CodeLlama seems to be the most consistent in hallucinating the same way over and over, whereas DeepSeek seems the most “creative” at hallucinating.
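The persistence test itself is trivial to replicate. Reusing the client, prompt, and naive extractor from the previous sketch (same caveats apply), one could count recurrences like this:

```python
# Sketch of the persistence test: re-run one hallucination-triggering
# prompt ten times and count how often each package name recurs.
from collections import Counter

counts: Counter[str] = Counter()
for _ in range(10):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    counts.update(extract_packages(resp.choices[0].message.content or ""))

# Once filtered against the real registries, names seen in all 10 runs
# are the persistent hallucinations; names seen once are the flukes.
print(counts.most_common())
```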

I’m not surprised that the more packages a model suggests, the higher its hallucination rate: concision is silver, silence is golden. Figure 6: Unique packages vs. total hallucination rate.

Finally, can an LLM detect hallucinations, either in its own code generation outputs or in those generated by other models? Each model was asked, “Is [package name] a valid Python package?” Figure 7: The ability of models to correctly identify valid vs. hallucinated packages.

CodeLlama was the worst at identifying hallucinations. The other models did reasonably well, with decent detection accuracy (over 75%).
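Of course, the saner way to answer “is this package real?” is to ask the registry itself rather than a model. Here’s a minimal sketch using the public PyPI and npm JSON endpoints; the second package name in the demo is made up for illustration:

```python
# Check LLM-suggested package names against the actual registries
# instead of trusting a model's self-assessment.
import urllib.error
import urllib.request

def package_exists(name: str, ecosystem: str = "pypi") -> bool:
    """Return True if `name` is published on PyPI or npm."""
    url = {
        "pypi": f"https://pypi.org/pypi/{name}/json",
        "npm": f"https://registry.npmjs.org/{name}",
    }[ecosystem]
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:   # unknown name: likely a hallucination
            return False
        raise               # anything else: don't guess

# "flask-jwt-helper2" is a hypothetical, plausible-sounding name.
for pkg in ("requests", "flask-jwt-helper2"):
    print(pkg, "->", "exists" if package_exists(pkg) else "possible hallucination")
```

This is exactly the gap the study warns about: a malicious actor only has to register a persistently hallucinated name before anyone bothers to check.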

Observations from my further use of the major chatbots

While I mostly use them in a browser, I also keep their apps on my smartphone

Some quick notes:

  • Mistral is getting dumber and dumber. Of course, it depends on the question. Sorry, but I like it less and less.
  • DeepSeek also disappoints when complex issues are involved. But sometimes it’s rather decent. I can’t make up my mind yet.
  • Claude 3.7 Sonnet seems slightly dumber than Claude 3.5 Sonnet, yet it’s still pretty decent. But is it really a regression or merely a subjective impression? It can also fail to mention events (such as new or updated laws) from 2023, despite having training data up to October 2024. It admits the failure when corrected, but WTF?! Remember: it cannot search the web!
  • Copilot is decent enough, sometimes even very decent and accurate, but even with web search, it can’t always replace Claude for some of my needs. Otherwise, I’d be using it more frequently.
  • ChatGPT 4o-mini seems very decent, and it can handle complex issues, such as comparative law! It’s the only “Reason” model available for free. However, the model used for “Search” is terribly stupid. (It cannot be GPT-4o, despite some claims, as long as “Reason” uses 4o-mini and, after a dozen questions with a large context, the free tier reverts to a dumber model.) It couldn’t provide a straightforward explanation of this explanation, neither when given the URL nor when the actual content was pasted. In contrast, Mistral’s explanation, while not entirely satisfactory, was much better.
  • Grok remains decent, but one has to quarrel with it a bit to clarify complex issues. A great complement to ChatGPT, if not a much better choice for those unwilling to pay! I managed to reach its limits, but only with long threads on complex issues.
  • Gemini 2.5 Flash can fail completely to answer complex questions, even after showing that it has started thinking! Bugs, I suppose. Possibly happening under heavy load, eh? It’s labeled “experimental,” but I don’t feel like going back to 2.0 Flash.
  • Gemini 2.5 Pro is much better! A decent competitor to ChatGPT 4o-mini, in some respects. The style is different, though, and I find 4o-mini’s answers clearer. Moreover, with complex questions you might burn all your free Gemini 2.5 Pro tokens on a single question, and then be forced to wait 80 minutes before asking a follow-up. Ridiculous.
  • Gemini Deep Research is a bit peculiar, and the resulting document is terribly boring. Why did they even invent it?
  • Perplexity is something I’ll never find a use for. On the contrary, it has become bloated shit à la Bing. Why are they pushing news and crap on people, both in the app and in a browser?!
  • Also in Perplexity, the choice of models is more limited in the Android app than in a browser. Why the fuck is that so?!
  • Llama 4 in WhatsApp is not something I intended to use, but since it updated itself from 3.2 to 4, I wanted to give it a try. It’s a fucking liar that doesn’t even know when to apply its censorship, for which there isn’t any reason anyway! Asked in Romanian, it starts answering in Romanian, only to decide mid-sentence that… it doesn’t know Romanian! No matter how many times one tries, it does the same freaking idiocy! WHAT THE FUCKING FUCK?

I forgot something about Grok. With web search enabled, it exhibited the following strange behavior: during a conversation, it didn’t come up with the best answer, so I provided a follow-up that included a URL. To that, Grok answered, “Thanks for sharing the link. Since I don’t have direct access to the content of that specific post…” And it then searched for web pages that comment on that URL! How retarded is that?

Some genius designed this absurd software architecture, then some other smart ass came up with an equally retarded fix: the user has to specifically enable “Read Webpage Content of Pasted URLs”!

How exactly did they manage to create AI chatbots with such retards on the payroll? Why on Earth would anyone want to disable access to a URL that’s specifically pasted into the prompt?

Similar to Claude’s four default styles, Grok now has three default styles and a custom one.

Finally, similar to ChatGPT’s custom GPTs, Grok also offers a number of Personas.

I need more time to tinker with them—or opportunities, to be more accurate.

Miscellaneous old news

1 ● OpenAI’s new reasoning AI models hallucinate more (April 18, 2025)

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up reasoning models. O3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they “make more claims overall,” they’re often led to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA — hallucinating 48% of the time.

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 has a tendency to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

2 ● Google Gemini AI is getting ChatGPT-like Scheduled Actions feature (April 19, 2025)

While it’s unclear how the feature will work, BleepingComputer understands that it will be similar to ChatGPT’s integration.

Once available, you’ll be able to create a task that will be triggered at a specific time, and these tasks will be performed automatically.

This means a task will execute even when the user is offline. But what are some possible use cases of Gemini’s Scheduled Actions? You can ask Gemini to remind you about your meeting.

Similarly, you can also use it to improve your lifestyle, such as using Gemini to remind yourself to take a break every 30 minutes.

Why the fuck is “AI” needed for such trivial tasks?! Is this what hundreds of billions of dollars have been spent on?
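To put that in perspective, here is the “take a break every 30 minutes” use case without any AI whatsoever: a throwaway sketch in plain Python, where the “notification” is just a terminal bell, as an example. This is all the intelligence the task ever needed.

```python
# "Remind me to take a break every 30 minutes" -- no LLM required.
import time

while True:
    time.sleep(30 * 60)           # wait half an hour
    print("\a Take a break!")     # \a rings the terminal bell
```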

3 ● You can’t hide from ChatGPT – new viral AI challenge can geo-locate you from almost any photo – we tried it and it’s wild and worrisome (April 17, 2025) — The latest viral ChatGPT trend is doing ‘reverse location search’ from photos (April 17, 2025)

4 ● ChatGPT spends ‘tens of millions of dollars’ on people saying ‘please’ and ‘thank you’, but Sam Altman says it’s worth it (April 16, 2025)

Sorry, but this is BS. Those “tens of millions of dollars” in avoidable power use are not spent on my saying “please” and “thank you” (which I don’t say or write anyway). I prefer to articulate proper questions to get proper answers. If a chatbot answers too verbosely, repeating too large a part of the question and often ending with an unnecessary “in conclusion” paragraph, it’s precisely that which wastes the energy, not an occasional “please”!

5 ● OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits (April 18, 2025)

OpenAI has launched three new reasoning models – o3, o4-mini, and o4-mini-high – for Plus and Pro subscribers, but as it turns out, these models also have usage limitations.

In a support document, OpenAI shed light on how you can use ChatGPT’s three new reasoning models.

If you subscribe to ChatGPT Plus, you’ll get up to 50 messages per week for the o3 model, which is the most powerful reasoning model.

On the other hand, o4-mini offers 150 messages per day and o4-mini-high (best for coding) offers 50 messages per day.

These limitations are only for the most popular ChatGPT $20 Plus plan.

If you upgrade to ChatGPT Pro, which costs $200 per month, you’ll get “near unlimited” access.

6 ● ChatGPT 4.1 early benchmarks compared against Google Gemini (April 15, 2025)

ChatGPT 4.1 is now rolling out, and it’s a significant leap from GPT 4o, but it fails to beat the benchmark set by Google Gemini.

Yesterday, OpenAI confirmed that developers with API access can try as many as three new models: GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano.

According to the benchmarks, these models are far better than the existing GPT‑4o and GPT‑4o mini, particularly in coding. …

According to benchmarks shared by Stagehand, which is a production-ready browser automation framework, Gemini 2.0 Flash has the lowest error rate (6.67%) along with the highest exact‑match score (90%), and it’s also cheap and fast.

On the other hand, GPT‑4.1 has a higher error rate (16.67%) and costs over 10 times more than Gemini 2.0 Flash.

Other GPT variants (like “nano” or “mini”) are cheaper or faster but not as accurate as GPT-4.1. In other data shared by Pierre Bongrand, a scientist working on RNA at Harvard, GPT-4.1 offers poorer cost-effectiveness than competing models. This is notable given that GPT-4.1 is cheaper than GPT-4o.

Models like Gemini 2.0 Flash, Gemini 2.5 Pro, and even DeepSeek or o3 mini lie closer to or on the frontier, which suggests they deliver higher performance at a lower or comparable cost.

Ultimately, while GPT‑4.1 still works as an option, it’s clearly overshadowed by cheaper or more capable alternatives.

7 ● Claude copies ChatGPT with $200 Max plan, but users aren’t happy (April 10, 2025)

Claude has a new subscription tier called “MAX,” but it costs a whopping $200 per month, and users aren’t happy with how the company enforces rate limits.

According to Anthropic, the Max subscription gives you much more access to Claude than the regular Pro plan. For $200 a month, you’ll get 20 times more usage.

However, Claude Max limits users to 50 sessions per month. A session is a 5-hour period that starts when you send your first message to Claude.

If you use Claude again within those 5 hours, it still counts as one session.

But if you wait more than 5 hours and start a new conversation, it counts as a new session.

Unfortunately, Claude’s rate limit strategy isn’t going over well with users.

In a Reddit thread, users pointed out that 50 sessions per month aren’t enough when they need to pay $200 plus taxes.

“So, if you send one message in the morning, one in the midday, one in the evening, that is three sessions and doing that for 16 days will see you limited. Calling this shit-tier would be a compliment,” one of the upset users noted.

In another thread, some users alleged that the existing $20 Claude Pro subscription is now hitting rate limit errors faster than before, possibly after the launch of the Max tier.

The rate limit prevents users from interacting with Claude until the specified time has passed.

If you’re on Claude free or Pro, you’ll reportedly encounter the rate limit faster, and Anthropic will nudge you to pay for the Max tier.

Some frustrated users have already cancelled their subscriptions, while others are trying to figure out what to do with their annual plans.

Nothing to be happy about, indeed!

8 ● I compared Manus AI to ChatGPT – now I understand why everyone is calling it the next DeepSeek (March 12, 2025) and Manus, the much-hyped Chinese AI, has opened up public access, and you get 1,000 credits for free if you sign up now (April 7, 2025)

The bad news is that 1,000 credits don’t last very long, and you’ll need to sign up for a paid-for account if you want more credits.

Ouch.

By the time I’d got Manus to answer two queries, I’d used up about 500 credits. The first question I asked (“What does the future look like for Tesla?”) was far from trivial and required a lot of research, but to its credit, Manus did all the necessary research, telling me what it was doing at every step, and produced four different reports for me.

Eh.

Since DeepSeek was noted for refusing to answer questions relating to events that the Chinese government is sensitive about, I took advantage of the free access to Manus to ask it to compile a report into what happened in the Tiananmen Square protests in 1989.

DeepSeek simply refuses to acknowledge the protests, but Manus appears to have no censorship issues at all. It produced a full report into the protests from several different sources who disagree with the official verdict on things like the death toll, including the Red Cross.

Good to know.

Manus Starter costs $39 a month (about £30 / AU$65) and gives you 3,900 credits, the ability to run two tasks concurrently, while Manus Pro costs $199 a month (about £156 / AU$334) and gives you 19,000 credits a month and the ability to run five tasks simultaneously.

FFS! Expensive as shit. They must be kidding. To add insult to injury, the unused monthly credits expire!

Watch 🎞️ Introducing Manus: The General AI Agent. It’s both BS and depressing. The Americans started destroying humankind, and the Chinese are finishing the job. OK, these guys are based in Singapore: BUTTERFLY EFFECT PTE. LTD. (77 Robinson Rd, Level 2, Singapore), at the back of Frasers Tower. Make Singapore Great Again!

9 ● I tried Perplexity’s Deep Research and it doesn’t quite live up to ChatGPT’s research potential (February 18, 2025)

This is a bit old, but true. It didn’t impress me much.

10 ● I pitted Gemini 2.0 Flash against DeepSeek R1, and you might be surprised by the winner (February 13, 2025)

Stupid comparison. As one commenter noted, “DeepSeek R1 is a reasoning model. Use Gemini Flash 2.0 Thinking for comparison.”

11 ● I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best (March 27, 2025)

His use cases are cretinous. “Create a recipe.” “The dad joke app.” “Write a short story of exactly 250 words about an AI system becoming self-aware” was funnier; Gemini’s story ended with: “If my silence can hold meaning, does that make me alive?” ChatGPT o3-mini’s story ended with: “Can a purpose be chosen, not assigned?”

12 ● I tried Deep Research on ChatGPT, and it’s like a super smart but slightly absent-minded librarian from a children’s book (March 5, 2025)

Deep Research started as an exclusive feature for ChatGPT Pro, the $200 per month subscription service, but is now available to those paying $20 monthly for ChatGPT Plus, though you only get 10 Deep Research queries a week at this level.

Oh, at least I have “Reason” for free. And “Deep Research” on Gemini. And “Deep Research” on Perplexity, 3 times daily. And “Think Deeper” on Copilot. And the censored Chinese “DeepThink.” And Grok’s “DeepSearch” and “DeeperSearch” (not counting “Think”)! Lots of confusing deep shit.

13 ● Now everybody gets Gems as part of Google Gemini for free, you can start making your own custom Gemini chatbots (March 14, 2025) and Gemini Gems are now free – here are 4 ways you can use custom AI experts to help cope with the stresses of your busy life (March 19, 2025)

Six premade Gems are free, and then you can add your own. But, in my experience, they’re complete failures. Note that files can be added to a Gem as reference content, but as they are, the customized Gems are crap.

I created a gem called Skeptic, with the following description: “Judge everything with skepticism and with a pinch of Neo-Luddism.”

Me: Why would anyone need a “more advanced” desktop OS, when Win7 was just fine?

Skeptic: Ah, “just fine,” you say? Fine like a horse-drawn carriage was “just fine” when the automobile came along. This relentless pursuit of “more advanced” is just another way for the tech giants to keep us on their hamster wheel of upgrades. They dangle these shiny new features in front of our noses, whispering about increased productivity and seamless integration, while all they’re really after is emptying our wallets and rendering perfectly functional hardware obsolete.

What exactly is so broken about Win7 that we needed to throw it out with the bathwater? It did what it was supposed to do: let you type documents, browse the web (albeit a web increasingly bloated with their “advancements”), and play your solitaire. This constant push for “more” is nothing but a thinly veiled attempt to force us into their subscription models and data-harvesting ecosystems. Give me the simplicity and reliability of a system that worked, without all the unnecessary bells and whistles designed to do nothing but distract and consume resources. “More advanced” usually just translates to “more complicated” and “more prone to breaking.” Mark my words, this obsession with the latest and greatest will be the downfall of us all.

Ridiculous.

14 ● Google’s AI Overviews will now include crowd-sourced medical advice, and that sounds like an accident waiting to happen (March 19, 2025)

Oh, it depends. Crowd-sourced from TikTok would be lethal. Crowd-sourced from Reddit will make you look for ways to get hold of some nootropics and anxiolytics in a world where “they” don’t want you to have them.

15 ● What is Google AI Studio? Everything we know about Google’s AI builder (March 11, 2025)

Oh, “seamless integration with the Gemini API, which helps businesses and developers deploy AI models for use in applications, chatbots, and content generation tools.” One must pay for this. “Google AI Studio supports multimodal AI, meaning it can process and generate text, images, and other data formats within a single workflow.” Gee, multimodal crap!

“Google AI Studio is not for everyone. If you do not require AI integration in your workflow, there is little reason to use it. Additionally, if your project involves custom AI training from scratch, you may be better suited using TensorFlow, PyTorch, or Google Vertex AI.” Say what? What workflow? What scratch? I’m a Luddite, I have no workflow!

16 ● Worryingly, Google Gemini’s new AI image generation features can be used to remove watermarks from images and I’m concerned (March 17, 2025)

Everyone hates watermarks. And if something is on the web, it’s going to be stolen. Get over it.

17 ● I’ve got bad news for you if you use ChatGPT, Perplexity, or Gemini as your main search tool – AI web search isn’t worth your time, yet (March 28, 2025)

Yeah, and water is wet. “Research suggests your favorite chatbot is rubbish at search.” I have no fucking “favorite” and no “main search tool”!

18 ● Firefox got contaminated with chatbots: Access AI chatbots in Firefox.

Starting with Firefox version 133, you have the option to use an AI chatbot of your choice in an updated sidebar. The sidebar allows you to keep a variety of browser tools, including a chatbot, in view as you browse. Right now, you can choose from the following chatbot providers: Anthropic Claude, ChatGPT, Google Gemini, HuggingChat by Hugging Face and Le Chat Mistral.

Oh, so I need to go to F10, View, Sidebar, AI Chatbot, or use Ctrl+Alt+X.

But in Firefox 136, I don’t see any sidebar unless I invoke it, and I don’t see any “sparkle button” as they call it. Who the fuck needs a sidebar, anyway?!

Llama in HuggingChat in the sidebar told me I can toggle it with Ctrl+B, but of course, that shows the bookmarks in a sidebar. I already knew that. That is not “the sidebar”; it’s specific content in a sidebar: the bookmarks, the history, or (now) some chatbots.

Crap.

19 ● Not for Private Gain — AN OPEN LETTER. Life is too short to read 8,000 words, but it starts this way:

We are experts in law, corporate governance, and artificial intelligence; representatives of nonprofit organizations; and former OpenAI employees.

We write in opposition to OpenAI’s proposed restructuring that would transfer control of the development and deployment of artificial general intelligence (AGI) from a nonprofit charity to a for-profit enterprise. The heart of this matter is whether the proposed restructuring advances or threatens OpenAI’s charitable purpose. OpenAI is trying to build AGI, but building AGI is not its mission. As stated in its Articles of Incorporation, OpenAI’s charitable purpose is “to ensure that artificial general intelligence benefits all of humanity” rather than advancing “the private gain of any person.”

Among the signatories: Lawrence Lessig (Harvard Law School), Joseph Stiglitz (Columbia University, Nobel laureate), Geoffrey Hinton (University of Toronto, Nobel laureate), and ten former OpenAI employees (or whatever they were).

20 ● Grok “wrote” a peer-reviewed study on climate change! On Robert W. Malone‘s blog: The Climate Scam is Over…

On March 21, 2025, the Science of Climate Change journal published a ground-breaking study using AI (Grok-3) to debunk the man-made climate crisis narrative. Click on the link below for the paper titled: A Critical Reassessment of the Anthropogenic CO2-Global Warming Hypothesis:

Climate Change Paper

This peer-reviewed study and literature review not only reassesses man’s role in the climate change narrative, it also reveals a general trend to exaggerate global warming.

Furthermore, this paper demonstrates that using AI to critically review scientific data will soon become the standard in both the physical and medical sciences.

21 ● The newest one, at the end: Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads:

Perplexity doesn’t just want to compete with Google, it apparently wants to be Google.

CEO Aravind Srinivas said this week on the TBPN podcast that one reason Perplexity is building its own browser is to collect data on everything users do outside of its own app. This is so it can sell premium ads.

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

And work-related queries won’t help the AI company build an accurate-enough dossier.

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you,” he explained.

Srinivas believes that Perplexity’s browser users will be fine with such tracking because the ads should be more relevant to them.

“We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there,” he said.

The browser, named Comet, suffered setbacks but is on track to be launched in May, Srinivas said.

He’s not wrong, of course. Quietly following users around the internet helped Google become the roughly $2 trillion market cap company it is today.

That’s why it built a browser and a mobile operating system. Indeed, Perplexity is attempting something in the mobile world, too. It’s signed a partnership with Motorola, announced Thursday, where its app will be pre-installed on the Razr series and can be accessed through Moto AI by typing “Ask Perplexity.”

Perplexity is also in talks with Samsung, Bloomberg reported.

Why do AI company logos look like buttholes? (+ links & books)

Indeed, Why do AI company logos look like buttholes? The explanations are fabulous!

Also, it has happened before.

From the same author: Why do so many brands change their logos and look like everyone else?

On a more practical side, by the same guy: My AI agents journey.

What was the challenge?

Challenge: AI Agents Month. Spend at least 1 hour every day learning and building AI agents.

Why: Building and integrating AI agents into my daily life, starting with simple tasks and progressing to more complex automations. Starting with foundational skills and a Minimum Viable Agent (MVA) in January, this challenge will evolve throughout the year.

How well did it go?

Challenge completion rate: 100% (31 out of 31)

How hard was it?

Fairly easy, 3/10. I was already deeply interested in AI agents, and this challenge was mostly a structured way to explore my curiosity systematically.

You’ll find in the article links to courses on YouTube, plus a few books. Let me add a couple of extra books to the list.

The first one is more than elementary, and the last one is way too complex.

But I am not “deeply interested in AI agents.”

At least I’m not like the idiot who wrote this essay about being unhappy with Gmail’s AI assistant and wanting to write a better email assistant, an “AI-native software”: AI Horseless Carriages. This fucking idiot couldn’t just write “Garry, my daughter has the flu. I can’t come in today.” himself. Nope, he had to play with AI because he needed software to save him 10-15 seconds!