[Dev Catch Up #32] - Qwen2.5, xLAM-1B, TanStack Router, Nvidia's Minitron SLMs, React Compiler, and much more.
Bringing devs up to speed on the latest dev news and trends, including a bunch of exciting developments and articles.
Welcome to the 32nd edition of DevShorts, Dev Catch Up!
I write about developer stories and open source, partly from my work and experience interacting with people all over the globe.
Some recent issues from Dev Catch Up:
Join 1,000+ developers to hear stories about open source and technology.
Must Read
Alibaba recently launched Qwen2.5 and claimed it to be the largest open-source release in history, with specialized models for coding and mathematics. Trained on 18 trillion tokens, this open-weight model family comes in a range of sizes. Learn all about the model from this article.
TanStack Router, a fully featured, type-safe router for client-side JavaScript applications, is the new hot thing in frontend technology. Learn everything about TanStack Router from this article; a minimal route-definition sketch follows below.
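To give a taste of why it is getting attention, here is a minimal sketch of a code-based route tree using the @tanstack/react-router v1 API. The routes and components are made up for illustration, so treat it as a starting point rather than a complete setup.

```tsx
// main.tsx — a minimal, type-safe route tree with @tanstack/react-router (v1 API)
import React from 'react'
import ReactDOM from 'react-dom/client'
import {
  createRootRoute,
  createRoute,
  createRouter,
  RouterProvider,
  Outlet,
  Link,
} from '@tanstack/react-router'

// The root route renders shared layout plus an <Outlet /> for child routes.
const rootRoute = createRootRoute({
  component: () => (
    <>
      <nav>
        <Link to="/">Home</Link>{' '}
        <Link to="/posts/$postId" params={{ postId: '1' }}>Post 1</Link>
      </nav>
      <Outlet />
    </>
  ),
})

const indexRoute = createRoute({
  getParentRoute: () => rootRoute,
  path: '/',
  component: () => <h1>Home</h1>,
})

// Path params are typed: `postId` is inferred from the '/posts/$postId' path string.
const postRoute = createRoute({
  getParentRoute: () => rootRoute,
  path: '/posts/$postId',
  component: function Post() {
    const { postId } = postRoute.useParams()
    return <h1>Post {postId}</h1>
  },
})

const router = createRouter({
  routeTree: rootRoute.addChildren([indexRoute, postRoute]),
})

ReactDOM.createRoot(document.getElementById('root')!).render(
  <RouterProvider router={router} />,
)
```

Links and params are checked at compile time, which is the main draw over stringly-typed routers.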
Performance issues in multi-tenant systems are very common: one container or process heavily consumes a server's resources and degrades performance for adjacent containers. This is known as the "noisy neighbor" problem; learn how Netflix's engineering team detects and solves it with eBPF in this article.
Next, let's head over to some news and articles of interest to developers and the wider tech community.
Good to know
Nvidia dropped Minitron, a family of small language models (SLMs) obtained through pruning and knowledge distillation. Learn more about the Minitron models from the official GitHub repository and check out the models here.
Rubra AI released an enhanced Phi-3 Mini with function-calling capability. It was trained in phases, expanding the parameter count from 3.8B to 4.7B. Learn more about the model here.
xLAM-1B is a tiny model dropped by Salesforce, and it is crushing the leaderboard by surpassing GPT-3.5 in function calling. More information on this model is available on the official Hugging Face model page.
React Compiler is being tested as an experimental feature by the release team, with the aim of making it generally available around the launch of React v19.0.0. It automatically memoizes components and values to optimize applications. Learn how to use it from this guided tutorial; a small illustrative sketch follows below.
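As a rough illustration (not taken from the tutorial), this is the kind of component you would normally sprinkle with useMemo and useCallback. With the compiler enabled in your build (the tutorial covers the actual setup), the plain version below gets memoized automatically: the filtering only re-runs when items or query change.

```tsx
import React, { useState } from 'react'

// A deliberately "expensive" helper that we would normally wrap in useMemo.
function expensiveFilter(items: string[], query: string): string[] {
  return items.filter((item) => item.toLowerCase().includes(query.toLowerCase()))
}

export function SearchList({ items }: { items: string[] }) {
  const [query, setQuery] = useState('')

  // No useMemo/useCallback here: React Compiler memoizes the computation
  // and keeps handler identities stable across re-renders on its own.
  const visible = expensiveFilter(items, query)
  const handleSelect = (item: string) => console.log('selected', item)

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {visible.map((item) => (
          <li key={item} onClick={() => handleSelect(item)}>
            {item}
          </li>
        ))}
      </ul>
    </>
  )
}
```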
Lastly, let's take a look at some trending scoops that deserve a special mention from the community.
Notable FYIs
Automate file organization with Local File Organizer. It uses powerful AI models, including language models and vision-language models, to organize files intelligently. Learn more about this tool here.
Making HTTP requests in Node.js is possible with the built-in, browser-familiar Fetch API as well as several third-party libraries. This article lists the top 5 HTTP request libraries for Node.js and discusses their pros and cons; a minimal Fetch example follows below.
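For reference, a basic request with the built-in Fetch API on Node.js 18+ looks like the sketch below; the URL and response shape are placeholders.

```ts
// Node.js 18+ exposes fetch globally, so no third-party library is needed.
async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url, {
    headers: { Accept: 'application/json' },
    // AbortSignal.timeout (Node 17.3+) stops the request from hanging forever.
    signal: AbortSignal.timeout(5_000),
  })
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`)
  }
  return res.json() as Promise<T>
}

// Usage with a placeholder URL and response shape:
getJson<{ id: number; name: string }>('https://api.example.com/items/1')
  .then((item) => console.log(item.name))
  .catch((err) => console.error(err))
```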
MDX is a content format that combines Markdown with JSX, and using it in Next.js takes a bit of configuration. This article shows how to set up MDX in a Next.js App Router application; a rough configuration sketch follows below.
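As a rough sketch (based on the @next/mdx package; the article and the Next.js docs are the authoritative references), the setup typically boils down to a next.config change plus an mdx-components file at the project root:

```js
// next.config.mjs — assumes @next/mdx and its peer packages are installed
import createMDX from '@next/mdx'

/** @type {import('next').NextConfig} */
const nextConfig = {
  // Allow .md/.mdx files to be used as pages alongside .ts/.tsx files.
  pageExtensions: ['js', 'jsx', 'md', 'mdx', 'ts', 'tsx'],
}

const withMDX = createMDX({
  // remark/rehype plugins go here if you need them
})

export default withMDX(nextConfig)
```

```tsx
// mdx-components.tsx — the App Router requires this file to map MDX elements to components
import type { MDXComponents } from 'mdx/types'

export function useMDXComponents(components: MDXComponents): MDXComponents {
  return {
    // Override built-in elements here if desired, e.g. custom headings or links.
    ...components,
  }
}
```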
Get a Perplexity-like experience with Curiosity. Built with LangGraph, FastHTML, and Tavily, it supports different models such as gpt-4o-mini and llama3-groq-8b-8192. Check it out here.
GitHub Copilot improves developer productivity with its AI capabilities. Here is an article previewing the platform equipped with OpenAI's o1 models, which are designed to solve hard, complex problems with advanced reasoning capabilities.
That’s it from us for this edition. We hope you are walking away with a ton of new information. Lastly, share this newsletter with your colleagues and pals if you find it valuable, and if you are reading for the first time, a subscription to the newsletter would be awesome.