[Dev Catch Up #17] - Devin - the AI Dev, Meta's Gen AI architecture, Blackwell AI chips, and more.
Bringing devs up to speed on the latest dev news and trends, including a bunch of exciting developments and articles.
Welcome to the 17th edition of DevShorts, Dev Catch Up!
I write about developer stories and open source, partly from my work and experience interacting with people all over the globe.
Some recent issues from Dev Catch Up:
Join 1,000+ developers to hear stories from open source and technology.
Must Read
“AI is going to take our jobs” - this statement has been gaining traction in the media since the introduction of an AI software engineer, billed as the first of its kind. Developed by Cognition Labs, the AI software engineer Devin can work alongside engineers or independently complete tasks for other engineers to review. Devin can plan and execute complex engineering tasks that require thousands of decisions, like writing and editing code, creating projects, and fixing bugs. It can recall context at every stage, learn over time, and fix its own mistakes. The assistant can actively collaborate with users and periodically report its progress on a task. Devin can also learn to use unfamiliar technologies, build and deploy applications end-to-end, autonomously find and fix bugs in codebases, train its own AI models, and much more. To learn more about Devin, check out this article published by the engineering team at Cognition Labs, the creator of Devin.
With all the developments in the field of AI, it is clear that investment in hardware infrastructure is crucial for developing the AI of the future. Better hardware infrastructure helps extract higher throughput and reliability from various AI workloads, and this is exactly where Meta is focusing at the moment. Recently, the company shared details on two versions of its 24,576-GPU data-centre-scale cluster. These clusters are designed to support its current and next-generation AI models, including Llama 3, and to power AI research and development across GenAI and other areas. Learn more about these clusters and Meta's GenAI infrastructure from this article published by the engineering team at Meta, which covers the design, compute, and performance of the clusters in detail.
With the growth of AI, the demand for better chips to power LLMs has only grown. Catering to this rising demand, tech giants like Nvidia are in no mood to slow down in the race to create the next big thing in the world of GPUs. At its GTC developer conference, the company announced a new generation of artificial-intelligence chips and software for running AI models. Nvidia unveiled its Blackwell AI chips with massive performance upgrades, expected to ship later this year. It also introduced the Nvidia Inference Microservice, or NIM, as an addition to its enterprise software subscription. NIM aims to make it easier to run AI models on Nvidia GPUs, helping companies that own older Nvidia GPUs use them for inference and keep using the hardware they already have. This article from CNBC covers the Blackwell AI chips and the newly added NIM software extensively.
If you are familiar with data structures and algorithms, you probably know about graphs. As a developer, you come across them often and can use them to analyze all sorts of systems. Yet if you look carefully, there is almost no graph support in mainstream languages. This is because there are too many different kinds of graphs, too many ways to represent each kind, and too many graph algorithms that depend on those choices. Faced with so many tradeoffs, design decisions, and maintenance burdens, languages leave graphs out of their standard libraries. Moreover, graph algorithms are very sensitive to graph representation and implementation details, so programmers often avoid third-party graph libraries because of their limitations and performance overhead. Learn more about graphs and their nitty-gritties from this article by Hillel Wayne, where he discusses graphs, their implementation, and their representation in detail.
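To see why representation matters so much, here is a minimal, hypothetical sketch in Python contrasting two common graph representations. All names here are illustrative, not from any particular library; the tradeoffs it shows are one reason a single standard-library graph type is so hard to design:

```python
from collections import defaultdict

# Representation 1 - adjacency list: compact for sparse graphs,
# fast iteration over a node's neighbours.
adj_list = defaultdict(set)

def add_edge(u, v):
    """Add an undirected edge between u and v."""
    adj_list[u].add(v)
    adj_list[v].add(u)

add_edge("a", "b")
add_edge("b", "c")

# Representation 2 - adjacency matrix: O(1) edge lookup,
# but O(V^2) memory even when the graph is sparse.
nodes = sorted(adj_list)
index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for u, neighbours in adj_list.items():
    for v in neighbours:
        matrix[index[u]][index[v]] = 1

# Same graph, two layouts: which one an algorithm wants depends
# on whether it asks "who are u's neighbours?" or "is (u, v) an edge?"
print(matrix[index["a"]][index["b"]])  # 1: edge exists
print(matrix[index["a"]][index["c"]])  # 0: no edge
```

Neither layout is "the" right one, which is exactly the design tension the article describes: directed vs. undirected, weighted vs. unweighted, and sparse vs. dense graphs all pull the representation in different directions.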
Now, let's head over to some news and articles that will be of interest to developers and the wider tech community.
Good to know
If you are into software development, you might have heard of the staging environment. It is a pre-production environment that resembles the production environment almost identically but is used exclusively for testing. Code and feature changes are validated here before being deployed to production. Because staging mirrors production in a controlled, regulated way, testers and developers can run their tests under conditions as close to production as possible. Read more about staging environments in this article published by the engineering team at Graphite, where they discuss the importance of the environment and how it forms a bridge between development and production.
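As a rough illustration of how an application might keep staging aligned with production, here is a minimal Python sketch. Every name in it (the `APP_ENV` variable, the database URLs, the `Config` class) is made up for the example; the point is only that staging shares production's shape while pointing at isolated resources:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    database_url: str
    debug: bool

# Staging mirrors production's settings, but points at isolated
# resources so tests can never touch real user data.
CONFIGS = {
    "production":  Config(database_url="postgres://prod-db/app", debug=False),
    "staging":     Config(database_url="postgres://staging-db/app", debug=False),
    "development": Config(database_url="postgres://localhost/app", debug=True),
}

def load_config() -> Config:
    """Pick the config for the current deployment environment."""
    env = os.environ.get("APP_ENV", "development")
    return CONFIGS[env]

os.environ["APP_ENV"] = "staging"
print(load_config().database_url)  # postgres://staging-db/app
```

Keeping the staging entry structurally identical to production (same fields, same flags) is what lets a test pass in staging carry real meaning about production behaviour.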
The open-source tool for this week is GritQL, a query language developed by Grit. It is a declarative language for searching and modifying source code. Written in Rust with built-in query optimization, it scales to repositories with over 10 million lines of code. The language's built-in module system ships over 200 standard patterns, and queries can target many languages, including JavaScript, Python, and Java. Check out GritQL in its repository here and leave a star to show them some love.
Lastly, let's take a look at some trending scoops that deserve a special mention from the community.
Notable FYIs
AI models are making advancements in every sector to make life easier for the general public. AI in music is not a new idea but an experiment that has been going on for quite some time. Suno AI has introduced an AI model that can create music from text prompts. The model composes the music itself, while using ChatGPT to generate the lyrics and title. The company's newest model is not yet generally available. Learn more about it from this article published by Gadgets 360, which briefly discusses the model and includes a link to an example track generated by it.
The AI race has come a long way, but it still has miles to go. While the developments in AI are just the beginning, we can unanimously agree that OpenAI is one of the leaders in this race. We might have expected a tech giant like Google, with all its muscle and funding in AI development, to sweep the race, yet it failed to ship LLMs ahead of its competition. This podcast from Latent Space features David Luan, who led Google's LLM efforts and co-led Google Brain, talking in detail about why the company failed to take the first stride in the race and how multimodal agents are the path to Artificial General Intelligence, or AGI.
Redis now has a new dual source-available licensing model. The new licenses let Redis sustainably provide permissive use of its source code and keep it available to developers, customers, and partners through the Redis community edition, while the company heads into its next phase of development as a real-time data platform with its own set of product offerings, clients, and tools. Read more about this change in this article from the official Redis blog, where the licensing details are shared.
That’s it from us for this edition. We hope you are going away with a ton of new information. Lastly, share this newsletter with your colleagues and pals if you find it valuable, and if you are reading for the first time, a subscription would be awesome.