Top Tweets
Top 20 tweets from the selected 200 research KOLs, scored by engagement signals.
- 01Andrej Karpathy@karpathy·Apr 4, 04:45 PMQuote
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: https://gist.gi...
Andrej Karpathy@karpathyLLM Knowledge Bases Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipula...
- 02Andrej Karpathy@karpathy·Apr 4, 11:28 PMQuote
Farzapedia, a personal Wikipedia of Farza, is a good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it or something: 1. Explicit. The memory artifact is explicit and navigable (the wiki), you can see exactly what the AI does and does not know and you can inspect and m...
Farza 🇵🇰🇺🇸@FarzaTVThis is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favori...
- 03xAI@xai·Apr 3, 07:31 PM
Introducing Quality mode on Grok Imagine – powered by our most advanced image generation model. Quality mode gives you enhanced details, stronger text rendering, and higher levels of creative control. Now available on web and mobile. Try it at http://grok.com/imagine https://t.co/3h41VNKgWM
- 04Claude@claudeai·Apr 3, 03:17 PM
Microsoft 365 connectors are now available on every Claude plan. Connect Outlook, OneDrive, and SharePoint to bring your email, docs, and files into the conversation. Get started here: https://claude.ai/customize/connectors https://t.co/sOrigP41FJ
- 05Julien Chaumond@julien_c·Apr 2, 04:45 PM
Just do this: `brew install llama.cpp --HEAD` Then: `llama-server -hf ggml-org/gemma-4-26B-A4B-it-GGUF:Q4_K_M` https://t.co/wApTXYBfah
- 06Sundar Pichai@sundarpichai·Mar 31, 05:13 PM
2004 was a good year, but your Gmail address doesn't need to be stuck in it. To say goodbye to v0t3f0rp3dr02004@gmail.com or mrbrightside416@gmail.com (or whatever you were into at the time), go to your Google Account settings and choose any name available. You'll keep your old username and you can sign in with both.
- 07GREG ISENBERG@gregisenberg·Mar 31, 11:38 PM
sequoia put out a blog post called "services is the new software" look at this map of over $1T in services being replaced by AI agents https://t.co/aFmDGhysfl
- 08Google DeepMind@GoogleDeepMind·Apr 2, 04:03 PM
Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we’re releasing them under an Apache 2.0 license. Here’s what’s new 🧵
- 09will depue@willdepue·Apr 1, 07:15 AM
Like many of my friends, I set a goal at the start of 2026: to become more Chinese this year, and I actually intend to follow through on it. After nearly three years at OpenAI, I've decided to leave to pursue new opportunities. After careful consideration, I'm excited to announce that starting today I am moving to Hangzhou to join DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd to build Chinese-style AGI. I simply couldn't turn down the chance to become a billionaire (in RMB), nor the equally irresistible chance to position myself deeply and secure my control over the future semiconductor supply chain. As DeepSeek's first diversity hire, I have been promised 1,024 "smuggled" B200s. Greetings from Hangzhou and the Hong Kong Special Administrative Region of China. Happy April.
- 10Claude@claudeai·Apr 2, 10:46 PMQuote
Computer use in Claude Cowork and Claude Code Desktop is now available on Windows.
Claude@claudeaiYou can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only. ht...
- 11will depue@willdepue·Apr 5, 02:46 AM
you guys really went easy on Project Hail Mary. just arrival for retards. interstellar for chuds
- 12Deedy@deedydas·Apr 6, 04:22 PM
Meta Harnesses is Autoresearch on steroids. Something I've been exploring recently is getting long-running agents to hill-climb on a verifiable task, continuously improving without my intervention. Karpathy's Autoresearch did this pretty well on specific tasks, but this weekend I tried Meta Harnesses, which moves one level of abstraction up. What does Meta Harness do? Autoresearch can be used in ...
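The hill-climbing loop described here can be sketched in plain Python: an agent keeps a best candidate, proposes mutations, and a verifiable score decides which mutations survive, with no human in the loop. All names below are illustrative (this is not Autoresearch's or Meta Harnesses' actual API), and a toy bitstring task stands in for a real verifiable task.

```python
import random

def hill_climb(candidate, mutate, score, iterations=400, seed=0):
    """Generic hill-climbing loop: keep a mutation only when the
    verifier's score strictly improves. A stand-in for the agent
    loop the post describes, not any real framework's API."""
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(iterations):
        trial = mutate(best, rng)
        trial_score = score(trial)
        if trial_score > best_score:  # the verifier decides
            best, best_score = trial, trial_score
    return best, best_score

# Toy verifiable task: maximize the number of 1-bits in a bitstring.
def flip_one_bit(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]

best, s = hill_climb('0' * 16, flip_one_bit, lambda b: b.count('1'))
```

The key property is that progress is gated entirely by the verifier, which is what lets the loop run without intervention.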
- 13Jerry Liu@jerryjliu0·Apr 3, 07:49 PMQuote
This is a cool article that shows how to *actually* make filesystems + grep replace a naive RAG implementation. ~~Filesystems + grep is all you need~~ Database + virtual filesystem abstraction + grep is all you need https://t.co/Qto548vl11
Dens Sumesh@densumeshhttp://x.com/i/article/2039508550356238336
- 14Andrej Karpathy@karpathy·Apr 4, 09:57 PMQuote
Something I've been thinking about - I am bullish on people (empowered by AI) increasing the visibility, legibility and accountability of their governments. Historically, it is the governments that act to make society legible (e.g. "Seeing like a state" is the common reference), but with AI, society can dramatically improve its ability to do this in reverse. Government accountability has not be...
Harry Rushworth@HrushworthThe British Government is a complicated beast. Dozens of departments, hundreds of public bodies, more corporations than one can count... Such is its complexity that there isn't an org chart for it. Well, there wasn't... Introducing ⚙️Mac...
- 15Demis Hassabis@demishassabis·Apr 2, 04:08 PM
Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and E2B & E4B for efficient edge-device use - happy building! https://t.co/Sjbe3ph8xr
- 16Deedy@deedydas·Apr 3, 03:06 PM
This is the best blog post on LLM inference I've seen this year. They achieved a 10x latency reduction and >1400 tokens/sec by moving speculative decode onto two 2GB SRAM/chip Corsairs, a small cost on top of a standard GPU setup on gpt-oss-120b. This performance at this price is insane. https://t.co/5JQU1dkmCY
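For context, speculative decoding (the technique the post says was offloaded to the Corsair hardware) works like this: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them, keeping the longest agreeing prefix plus one corrected token on the first mismatch. Below is a toy greedy sketch; `target` and `draft` are hypothetical callables standing in for real models, and none of the SRAM offloading from the post is modeled.

```python
def speculative_decode(target, draft, prompt, new_tokens=8, k=4):
    """Toy greedy speculative decoding. Output is identical to
    decoding with `target` alone; the draft only changes how many
    expensive target calls are spent per accepted token."""
    seq = list(prompt)
    while len(seq) - len(prompt) < new_tokens:
        # Draft phase: cheap model proposes k tokens autoregressively.
        spec = list(seq)
        for _ in range(k):
            spec.append(draft(spec))
        # Verify phase: keep the agreeing prefix; on the first
        # mismatch, take the target's token instead and stop.
        for i in range(k):
            ctx = seq + spec[len(seq):len(seq) + i]
            t = target(ctx)
            if t != spec[len(seq) + i]:
                seq = ctx + [t]
                break
        else:
            seq = spec  # every drafted token was accepted
    return seq[:len(prompt) + new_tokens]
```

The invariant worth noting is that the output exactly matches plain greedy decoding with the target model; the draft model only affects speed, never the result.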
- 17Unsloth AI@UnslothAI·Apr 6, 03:34 PM
You can now train and run 500+ models in our free notebook!✨ GitHub repo: https://github.com/unslothai/unsloth Colab Notebook: https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb https://t.co/abXXrovIvG
- 18Lee Robinson@leerob·Apr 3, 06:08 PM
Life update... daughter #2 has arrived 🌸🌹 https://t.co/S7vvNRUVJq
- 19Unsloth AI@UnslothAI·Apr 3, 08:16 PM
Gemma 4 E4B (4-bit) completed a full repo audit by executing Bash code and tool calls locally. Runs on just 6GB RAM. https://t.co/ugeLkXfv8v
- 20Unsloth AI@UnslothAI·Apr 2, 07:46 PMQuote
Gemma 4 E4B was able to search and cite 10+ websites, execute code to find the best answer! 🔥 You only need 6GB RAM to try this in Unsloth Studio. GitHub repo: https://github.com/unslothai/unsloth https://t.co/tWWkyaPaqz
Unsloth AI@UnslothAIGoogle releases Gemma 4. ✨ Gemma 4 introduces 4 models: E2B, E4B, 26B-A4B, 31B. The multimodal reasoning models are under Apache 2.0. Run E2B and E4B on ~6GB RAM, and on phones. Run 26B-A4B and 31B on ~18GB. GGUFs: https://huggingface.co...