Chicken thinking: "Someone please explain to this guy how we solve the Schrödinger equation"
I use the jan-beta GUI; you can run locally any model that supports tool calling, like qwen3-30B or jan-nano. You can download and install MCP servers (from, say, mcp.so) that serve different tools for the model to use, like web search, deep research, web scraping, or downloading and summarizing videos. There are hundreds of MCP servers for different use cases.
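For anyone trying this, MCP clients generally read a JSON configuration listing the servers to launch. A rough sketch in the common `mcpServers` convention (the package name here is invented for illustration, and jan's exact setup flow may differ):

```json
{
  "mcpServers": {
    "web-search": {
      "command": "npx",
      "args": ["-y", "example-mcp-web-search"]
    }
  }
}
```

Each entry tells the client how to start one server process; the tools that server exposes then become available to the model.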
Your toxic partner: "What were you doing at that cafe at 5:42 PM?"
Twelve years ago the Moto X was launched by Motorola, at that time controlled by Google. I had one, and at any moment you could say "Hello Google, what time is it?" and it responded. It was constantly listening. All the time. And it was a perfectly normal phone in terms of battery life and data usage. TWELVE years ago; imagine how much easier it would be to implement that now, with more powerful and efficient chips and bigger batteries.
From an article about Moto X back then: "If you want to take a selfie, you should be able to simply say “Take a selfie!” In short, your smartphone should live up to its name. That’s the goal with the Moto Voice and Moto Assist software integrated into the second generation Moto X smartphone. And to do that, the Moto X is always listening, for verbal commands from the user and also ambient cues of the context. That emergent behavior is spawned by complex interactions between the software and hardware"
Only much later did I come to the conclusion that with the Moto X Google was running its first tests of using the microphone for mass surveillance.
I would say the funniest... but it's the worst, because you have to read the whole paper to know what it is about.
How many years until they run out of characters?
"It's About Time" is also the title of one of the best introductory books on relativity, by David Mermin. Especially recommended for those who aren't good at math but want to understand it beyond popular science books.
Yes, it is MRI... Man Roughly Incinerated.
It refers to the ability to locate the source of a sound. The model assumes that localization is achieved only by the difference in the sound's arrival time at the two ears. That's why the curve is a hyperbola: a hyperbola is the set of points whose distances to the two foci (the ears) have a constant difference, so you couldn't tell which point on the hyperbola is the actual source (hence the confusion). But this is too simplistic; the auditory system is much more sophisticated, and the source can be localized by other means.
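The geometry above can be checked numerically. A minimal Python sketch (the ear separation and the source position are made-up values for illustration): it computes the interaural time difference (ITD) for one source, then samples other points on the hyperbola whose foci are the ears and confirms they all produce the same ITD.

```python
import math

C = 343.0        # approximate speed of sound in air, m/s
EAR_SEP = 0.20   # assumed distance between the ears, m; ears at (-0.10, 0) and (0.10, 0)

def itd(x, y):
    """Interaural time difference for a source at (x, y)."""
    d_left = math.hypot(x + EAR_SEP / 2, y)   # distance to the left ear
    d_right = math.hypot(x - EAR_SEP / 2, y)  # distance to the right ear
    return (d_right - d_left) / C

# ITD produced by a source 2 m ahead and 1 m to the left (arbitrary example):
reference = itd(-1.0, 2.0)

# Hyperbola with the ears as foci and |d_right - d_left| = 2a held constant:
a = abs(reference) * C / 2      # semi-major axis
f = EAR_SEP / 2                 # focal distance
b = math.sqrt(f * f - a * a)    # semi-minor axis (valid because a < f here)

# Points on the branch nearer the left ear all yield the same ITD, so
# arrival-time difference alone cannot single out the true source.
same_itd = [itd(-a * math.cosh(t), b * math.sinh(t)) for t in (0.5, 1.5, 3.0, 4.0)]
```

Every value in `same_itd` matches `reference` to within floating-point error, which is exactly the "confusion": infinitely many positions are consistent with one time difference.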
It's totally doable because they are real people.
Unbeatable running multiple threads
When a physicist wants to impress a mathematician, he explains how he tames infinities with renormalization.