[Dev Catch Up #43] - Apollo video LLM, Gemini 2.0, lla file explorer, Database migration with Drizzle, Veo-2 video generation model, and much more.
Bringing devs up to speed on the latest dev news and trends, including a bunch of exciting developments and articles.
Welcome to the 43rd edition of DevShorts, Dev Catch Up!
I write about developer stories and open source, partly from my work and experience interacting with people all over the globe.
Join 2,500+ developers to hear stories from open source and technology.
Must Read
With LLM advancements happening throughout the tech world, video perception capabilities are rapidly being integrated into Large Multimodal Models (LMMs). However, the underlying mechanisms driving their video understanding remain poorly understood. This motivated a study that explores what actually drives video understanding in LMMs and introduces Apollo, a state-of-the-art family of LMMs that achieves superior performance across different model sizes. Learn more about it here.
The tech community was left stunned by Google's unveiling of Gemini 2.0 last week. With better performance and quality, plus support for building real-time vision and audio streaming applications, it is a substantial upgrade over its predecessor. Learn more about Gemini 2.0 from the official documentation, and see the short sketch below for what calling it from code can look like.
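For a feel of the developer experience, here is a minimal, non-authoritative sketch of calling Gemini 2.0 from Node. It assumes the @google/generative-ai package, a GEMINI_API_KEY environment variable, and the gemini-2.0-flash-exp model id; check the official docs for the current package and model names.

```ts
// Minimal sketch: a single text generation call to Gemini 2.0.
// The model id and the GEMINI_API_KEY variable are assumptions; adjust
// to whatever the official documentation currently recommends.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash-exp" });

async function main(): Promise<void> {
  const result = await model.generateContent("Summarize what's new in Gemini 2.0.");
  console.log(result.response.text());
}

main();
```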
Microsoft is retiring App Center and CodePush, and as a result many in the React Native community are looking for an alternative for OTA (over-the-air) updates. EAS Update is a strong candidate, and here is a detailed guide that shows how to integrate it into a bare React Native app using Fastlane; a sketch of the client-side update check follows below.
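For a feel of the client side, here is a minimal sketch of checking for and applying an OTA update at runtime with the expo-updates module, which is the library EAS Update builds on. The actual EAS and Fastlane setup is what the guide covers; the function name here is purely illustrative.

```ts
// Minimal sketch: check for an OTA update, download it, and restart the app.
// Assumes expo-updates is already installed and configured for EAS Update.
import * as Updates from "expo-updates";

export async function applyLatestUpdate(): Promise<void> {
  try {
    const update = await Updates.checkForUpdateAsync();
    if (update.isAvailable) {
      // Download the new JS bundle, then reload so it takes effect.
      await Updates.fetchUpdateAsync();
      await Updates.reloadAsync();
    }
  } catch (error) {
    // An update failure should never crash the app; keep the bundled JS.
    console.warn("OTA update check failed:", error);
  }
}
```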
Much of this LLM progress owes its success to scaling train-time compute. That technique is by no means cost-effective, though, so there is significant interest in a complementary approach: scaling test-time compute. Learn more about it from this article.
With that covered, shouting out a top open-source project is a delight, and here it is:
OSS Highlight of the Week
In this issue, we are focusing on lla, a high-performance file explorer written in Rust. It enhances the traditional Linux list (ls) command with modern features, rich formatting options, and a powerful plugin system. Check out this awesome project on its official GitHub page and leave a star to support it.
Now, let's head over to some news and articles that will be of interest to developers and the wider tech community.
Hope you are enjoying this edition of the newsletter so far! Support us by following our LinkedIn and X pages.
Your support is highly appreciated!
Good to know
Drizzle is an exceptionally capable object-relational mapper (ORM). It exposes SQL itself via a thin, strongly typed API that lets you write complex queries. Here is a guided article that discusses Drizzle's database migration feature; a small schema sketch follows below.
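As a taste of where Drizzle migrations start, here is a minimal schema sketch. The table and column names are made up for illustration; migrations are then typically generated and applied with drizzle-kit, though the exact commands depend on your drizzle-kit version and config.

```ts
// Minimal sketch of a Drizzle schema (PostgreSQL flavour).
// A typical workflow generates SQL migrations from this schema with
// drizzle-kit (e.g. `npx drizzle-kit generate`, then `npx drizzle-kit migrate`;
// exact commands vary by version).
import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: serial("id").primaryKey(),
  email: text("email").notNull().unique(),
  createdAt: timestamp("created_at").defaultNow(),
});
```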
A browser cache stores a copy of a file on the user's device so that, on a subsequent visit to the same site, the cached file can be used instead of downloading it again from the server. This article discusses how to improve page load time with browser caching; a small header-setting sketch follows below.
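To make the idea concrete, here is a minimal sketch of serving one asset with a long-lived Cache-Control header using Node's built-in http module. The file path and max-age value are illustrative assumptions, and aggressive caching like this usually goes hand in hand with fingerprinted file names.

```ts
// Minimal sketch: serve a static asset with caching headers.
// Assumes ./dist/app.js exists; the one-year max-age is an illustrative choice.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

const server = createServer(async (req, res) => {
  if (req.url === "/app.js") {
    const body = await readFile("./dist/app.js");
    res.writeHead(200, {
      "Content-Type": "application/javascript",
      // Tell the browser it may reuse this file for a year without revalidating.
      "Cache-Control": "public, max-age=31536000, immutable",
    });
    res.end(body);
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000);
```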
Node.js has several types of streams, and one of them is the readable stream. This article explores the core concepts of readable streams in Node.js; a minimal example follows below.
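Here is a minimal sketch of the core idea: Readable.from() wraps any iterable in a readable stream, and readable streams can themselves be consumed with async iteration.

```ts
// Minimal sketch: create a readable stream and consume it chunk by chunk.
import { Readable } from "node:stream";

async function main(): Promise<void> {
  const readable = Readable.from(["chunk-1", "chunk-2", "chunk-3"]);

  // Readable streams are async-iterable, so for-await reads them in order.
  for await (const chunk of readable) {
    console.log("received:", chunk);
  }
}

main();
```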
Long-context LLMs have enabled numerous downstream applications, but they pose significant computational and memory-efficiency challenges. These challenges are addressed by optimizations for long-context inference centred around the KV cache. However, existing benchmarks often evaluate in a single-request setting, neglecting the full lifecycle of the KV cache in real-world use. To address this, SCBench (SharedContextBench), a new comprehensive benchmark for evaluating long-context methods, has been introduced. Learn all about it here.
The Technology Innovation Institute released its latest family of Falcon LLMs. The Falcon3 family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters. Check out the models on the Hugging Face page.
Lastly, we will take a look at some trending scoops that deserve a special mention for the community.
Notable FYIs
Last week, Google shocked everyone with another announcement apart from the new Gemini: Veo-2, arguably the most capable video generation model the community has seen yet. Here are more details about the model on the official product page.
CSS evolved a lot in 2024, and there is plenty to cover in this area of frontend development. Here is the official CSS Wrapped 2024 from the Google Chrome team.
An SDK (Software Development Kit) is, simply put, software that helps you build other software. Whether you are a novice or a professional, this article on SDKs is worth a read, as it covers the topic comprehensively.
A terminal works through the combined effort of the operating system, the shell, the terminal emulator, and whatever program happens to be running in it. These pieces follow a certain set of rules, and this article by Julia Evans sums them all up.
That’s it from us for this edition. We hope you are going away with a ton of new information. Lastly, share this newsletter with your colleagues and pals if you find it valuable, and if you are reading for the first time, a subscription to the newsletter would be awesome.