[Dev Catch Up # 105] - Gemma 4, Qwen3.6 Plus, Microsoft's MAI Models, GLM 5V Turbo, Semiotic, Claude Code's Source Code Leaked, Claude Code tips, Video chat with your agents with this Skill & more!
Bringing devs up to speed on the latest news and trends, including a bunch of exciting developments and articles.
Welcome to the 105th edition of DevShorts, Dev Catch Up.
For those who joined recently or are reading Dev Catch Up for the first time, I write about developer stories and open source, partly based on my work and experience interacting with people all over the globe.
Thanks for reading Dev Shorts! Subscribe for free to receive new posts and support my work.
Some recent issues from Dev Catch Up:
Join 8,800+ developers to hear stories from open source and technology.
Must Read
Google has released Gemma 4, a new open model family built on the same research and technology behind Gemini 3. Google says it supports agent workflows, multimodal reasoning, and 140 languages. Check Google DeepMind’s post for more details.
Microsoft has announced three new MAI models in Foundry: MAI Transcribe 1, MAI Voice 1, and MAI Image 2, covering speech-to-text, voice generation, and image generation. The models focus on speed, quality, and lower cost. Check Microsoft’s announcement for more details.
Qwen has introduced Qwen3.6 Plus. It comes with better coding, stronger multimodal vision, and a 1M-token context window in the API. Qwen says it is built to support real-world agents and developer workflows. Check Qwen’s blog for more details.
Anthropic accidentally exposed Claude Code source code through a source map file shipped in its npm package. The leaked code then spread across GitHub through multiple repos and forks, and Anthropic sent takedown notices to remove those copies. Check the TechCrunch report for more details.
OSS Highlight of the Week
This week we are featuring Semiotic. It is a React library for building data visualizations in web apps. It comes with schemas and an MCP server, so AI coding assistants can generate correct chart code on the first try. Check the GitHub repo for more details.
Good to know
Claude Code now has a No Flicker mode. It reduces screen flicker and keeps memory usage stable in long chats. It also adds mouse support in the terminal. Check Anthropic’s announcement for more details.
Prism ML has introduced 1 bit Bonsai 8B, a model with 1-bit weights that needs only 1.15 GB of memory. Prism says it is built for robotics, real-time agents, and edge use cases. Check Prism ML’s announcement for more details.
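As a quick sanity check on that memory figure (my own back-of-the-envelope arithmetic, not from the announcement): 8B parameters at 1 bit each come out to roughly 1 GB, so 1.15 GB is plausible once embeddings and other higher-precision tensors are counted.

```python
# Back-of-the-envelope check; assumes "8B" means 8e9 weights.
params = 8e9           # parameter count
bits_per_weight = 1    # 1-bit quantization
bytes_total = params * bits_per_weight / 8
gb = bytes_total / 1e9
print(f"{gb:.2f} GB")  # 1.00 GB for the quantized weights alone
```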
Anthropic has shared new research on emotion concepts in language models. The research says models can form internal emotion related representations from human text. Anthropic says these patterns can influence how Claude behaves in some cases. Check Anthropic’s research post for more details.
AI has helped a solo founder build a $1.8B business. Medvi, reportedly built with AI tools for coding, ads, and support, shows how a very small team can now scale much faster with AI. Check the New York Times report for more details.
Andrej Karpathy shared a workflow for building personal knowledge bases with LLMs. He uses them to turn papers, articles, repos, and images into a markdown wiki, which then keeps growing through search, Q&A, and new outputs. Check Karpathy’s post for more details.
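A minimal sketch of that kind of pipeline (the file layout and the `summarize` callable here are my own assumptions for illustration, not Karpathy’s actual setup): feed a source document through an LLM summarizer and append the result to a growing markdown wiki.

```python
from pathlib import Path
from typing import Callable

def add_to_wiki(title: str, source_text: str,
                summarize: Callable[[str], str],
                wiki_dir: str = "wiki") -> Path:
    """Summarize a source document and store it as a markdown wiki page.

    `summarize` is any callable mapping raw text to a markdown summary,
    e.g. a thin wrapper around your preferred LLM API (hypothetical here).
    """
    Path(wiki_dir).mkdir(exist_ok=True)
    page = Path(wiki_dir) / f"{title.lower().replace(' ', '-')}.md"
    page.write_text(f"# {title}\n\n{summarize(source_text)}\n")
    return page

# Usage with a stub summarizer (a real setup would call an LLM):
stub = lambda text: text[:80] + "..."
path = add_to_wiki("Attention Paper",
                   "The Transformer relies entirely on attention...", stub)
print(path.read_text().splitlines()[0])  # "# Attention Paper"
```

Searching and Q&A over the resulting folder of markdown files is then just grep, embeddings, or another LLM pass over the pages.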
Notable FYIs
Pika has released a beta video chat skill for agents. It is powered by a new real time model called PikaStream 1.0. Pika says it keeps memory and personality during calls and can also support agentic tasks in the same session. Check Pika’s announcement for more details.
Boris Cherny shared practical Claude Code tips from his daily workflow. He showed features like teleport, remote control, loop, schedule, hooks, and Dispatch. These help Claude Code run across devices and automate more work. Check Boris Cherny’s thread for more details.
Qwen has released Qwen3.5 Omni, a native multimodal model that can understand text, images, audio, and video. Qwen says it brings stronger multilingual support and better audio and video understanding. Check Qwen’s blog for more details.
Falcon has released Falcon Perception, a vision model for referring expression segmentation. Along with it, they also released Falcon OCR, a 0.3B OCR model that Falcon says performs on par with much larger models. Check Falcon’s post for more details.
Z.ai has introduced GLM 5V Turbo, a vision model built for coding. It can take images, video, text, and files as input. Z.ai says it is built for multimodal coding tasks and can work with agents like OpenClaw. Check Z.ai’s docs for more details.
That’s it from us for this edition. We hope you are going away with a ton of new information. If you found it valuable, share this newsletter with your colleagues and pals, and if you are reading it for the first time, a subscription would be awesome.


