[Dev Catch Up #13] - Google's AI accelerator, hypercomputer, Kubernetes v1.29, Duet AI, and more.
Bringing devs up to speed on the latest news, including the launch of Google's new AI hardware, announcements from Kubernetes v1.29, and a bunch of exciting developments and articles.
Tech never stops evolving, with new developments and happenings all the time. As usual, DevShorts is back with another issue to bring you a simple digest from the community. Like our previous issues, this one covers the stories trending in our developer circle, along with a look at new open-source projects, tutorials, conference news, and much more.
The adoption of AI and ML has accelerated rapidly in the post-pandemic era, and tech giants are racing to produce the hardware and software that will power rapidly evolving generative AI models. A few days ago, Google announced its TPU v5p AI accelerator and an AI Hypercomputer. TPU v5p is Google's most powerful, scalable, and flexible AI accelerator to date. TPUs are used for training and serving AI products, including Google's most capable generative AI model, Gemini. The AI Hypercomputer from Google Cloud is a supercomputer architecture that combines performance-optimized hardware, open software, leading ML frameworks, and flexible consumption models into one integrated system. Through this systems-level codesign, AI training, tuning, and serving get a significant boost in efficiency and productivity. Learn more about the AI accelerator and hypercomputer from this article published by Google Cloud, where you will find an in-depth explanation along with performance analysis.
Great news for the cloud-native community after KubeCon NA: Kubernetes is back with the release of its latest version, v1.29. It is the last Kubernetes release of 2023, and it comes with a bunch of enhancements and new features. A total of 72 enhancements, along with some code freezes and deprecations, landed in this release. The Gateway API, the eventual successor to the Ingress API, gets a stable v1.0 release. Sidecar containers, i.e. init containers with an Always restart policy, arrived in Kubernetes v1.28 as an alpha feature. With Kubernetes v1.29, sidecar containers graduate to beta: if your Pod includes one or more sidecar containers, the kubelet will delay sending the TERM signal to the sidecars until the last main container has fully terminated. v1.29 also brings alpha support for WebSockets, deprecating SPDY. Apart from these major changes, there are a number of security enhancements, including the alpha releases of ensuring secret pulled images and structured authentication configuration, and the beta release of signing release artifacts, alongside a reduction of Secret-based service account tokens. Learn more about the new version from this blog published by the Kubernetes team, where the stable, beta, and alpha improvements are discussed in detail.
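To make the sidecar behavior concrete, here is a minimal, hypothetical Pod spec sketch (container names and images are illustrative, not taken from the release notes). The sidecar is declared as an init container with `restartPolicy: Always`, which keeps it running alongside the main container for the Pod's whole lifetime:

```yaml
# Hypothetical example of the sidecar containers feature (beta in v1.29).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper          # sidecar: runs for the Pod's entire lifetime
      image: fluent/fluent-bit   # illustrative image choice
      restartPolicy: Always      # this field is what marks an init container as a sidecar
  containers:
    - name: app
      image: nginx
```

With this spec, the kubelet only terminates `log-shipper` after `app` has fully exited, so logs emitted during shutdown are still shipped.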
With the rise of generative AI, and AI in general, there has been a tremendous increase in organizations using AI to build tools for convenience. Following the trend, Google recently announced Duet AI, its AI-assisted suite of tools for application development, which can help with code completion and generation. It will compete directly with the existing GitHub Copilot. It can be used in multiple IDEs, such as Visual Studio Code, JetBrains IDEs, and cloud-based workspaces. Google has partnered with 25 organizations that have provided datasets for the platform to help developers build and troubleshoot applications. Duet AI will switch to Google's most powerful AI model, Gemini, in the coming weeks. Learn more about Duet AI from this article published by TechCrunch, and have a look at the official Google Cloud product page here.
GitHub Actions got an overhaul with the release of new actions and features. With this update, GitHub's automation tool becomes more powerful and versatile. It is built with developer convenience in mind, balancing the extensibility and flexibility needed to build CI/CD workflows easily and quickly with security and governance offerings, plus scalability for your future needs. Security gets a big boost: GitHub-hosted runners become more secure, and new continuous deployment features make deployments safer. Managing workflows centrally is now simpler with the help of repository rulesets. GitHub Actions also becomes more powerful with the addition of Apple silicon-powered M1 macOS runners. Have a more detailed look at these additions and features in this article published by the engineering team at GitHub.
The rise of AI has brought it into a lot of tech products, and Integrated Development Environments (IDEs) are no exception: AI is being added to various IDEs for developer convenience. JetBrains recently updated RubyMine, its IDE for the Ruby programming language. In the new version, 2023.3, you get improved AI Assistant support, which can help you generate unit tests and suggest names for the local variables and parameters in your code. Ruby context handling, along with the way LLMs analyze the codebase, has been improved, which in turn improves the AI Assistant. The new version also comes with code insights for Rails 7.1 strict locals, model controllers, and mailers, and the Ruby debugger has been updated with new type renderers. There are more updates, and you can learn about them in this article published by JetBrains.
Now, we will head over to some news and articles that will be of interest to developers and the wider tech community.
Good to know
If you operate a complex distributed system, it is important to balance cost against service availability. It is impossible to have unlimited hardware, but it is essential to keep enough capacity to serve all users and absorb unexpected traffic, since even small variations in load can degrade performance and availability. To build resilience against overload conditions, you can turn to the concept of load shedding. Load shedding is the intentional dropping or delaying of requests when a system becomes overloaded, to avoid a full meltdown. It keeps the system operational by selectively discarding a small portion of the traffic, which maximizes throughput and minimizes response time by prioritizing work while resources are saturated. Learn more about load shedding from this article published by codereliant.io, where strategies to shed load are discussed in detail along with code examples.
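As a minimal sketch of the idea (not the codereliant.io article's implementation; the class and method names here are made up for illustration), a service can cap its pending work and reject anything beyond that cap instead of queueing it indefinitely:

```python
import queue


class LoadShedder:
    """Accept work up to a fixed capacity; shed (reject) the rest."""

    def __init__(self, max_pending: int):
        # Bounded queue: its capacity is the overload threshold.
        self._pending = queue.Queue(maxsize=max_pending)

    def try_submit(self, request) -> bool:
        """Return True if the request was accepted, False if it was shed."""
        try:
            self._pending.put_nowait(request)
            return True
        except queue.Full:
            # Overloaded: drop this request so already-accepted work stays fast.
            return False

    def next_request(self):
        """Hand the oldest accepted request to a worker."""
        return self._pending.get_nowait()


shedder = LoadShedder(max_pending=2)
results = [shedder.try_submit(i) for i in range(4)]
print(results)  # first two accepted, the rest shed
```

Real systems usually shed based on richer signals (latency, CPU, request priority) rather than a single queue bound, but the shape is the same: a fast, cheap rejection path instead of unbounded queueing.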
Observability became far more standardized with the introduction of OpenTelemetry, which is now accepted as the de facto observability standard by a number of cloud-native organizations. If you are starting out with cloud-native applications, there's a high chance you have heard of, and even used, the open-source container orchestration platform Kubernetes. With OpenTelemetry, you can easily collect telemetry data from a Kubernetes cluster. Here's a full-length guided tutorial from Ruturaj Shitole which explains in detail how to set up OpenTelemetry in a Kubernetes environment.
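To give a flavor of what such a setup involves (the tutorial's exact configuration may differ; the backend endpoint below is a placeholder), an OpenTelemetry Collector running in the cluster is typically configured with a receiver, a processor, and an exporter wired into a pipeline:

```yaml
# Minimal OpenTelemetry Collector configuration sketch.
receivers:
  otlp:                 # accept OTLP data from instrumented apps in the cluster
    protocols:
      grpc:
      http:
processors:
  batch: {}             # batch telemetry before export to reduce overhead
exporters:
  otlphttp:
    endpoint: http://my-observability-backend:4318   # placeholder backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The `otlp` receiver, `batch` processor, and `otlphttp` exporter are standard Collector components; a production deployment would add Kubernetes-specific receivers and metadata processors on top of this skeleton.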
Architecture patterns are complex, but they are an integral part of building any application. The traditional CRUD (Create, Read, Update, Delete) pattern has been a fixture of systems architecture for many years, but it can become less effective as systems scale and grow more complex, because CRUD handles both read and write operations with the same data model and database schema. To address this, the CQRS (Command Query Responsibility Segregation) pattern was introduced: commands are responsible for updating the system's state, while queries are responsible for data retrieval. Get a deeper understanding of this pattern from this article published by Pier-Jean Malandrino from Scub Lab, where the components, benefits, and tradeoffs of the pattern are explained in detail.
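A bare-bones sketch of the command/query split might look like this (the class names and the single shared dict are illustrative assumptions, not from the article; in practice the write and read sides often use separate, synchronized stores):

```python
from dataclasses import dataclass


# --- Write side: commands mutate state, return nothing to read from ---
@dataclass
class CreateOrder:
    order_id: str
    amount: float


class OrderCommandHandler:
    def __init__(self, store: dict):
        self._store = store

    def handle(self, cmd: CreateOrder) -> None:
        # The command handler is the only place that writes order state.
        self._store[cmd.order_id] = {"amount": cmd.amount, "status": "created"}


# --- Read side: queries only retrieve data, never mutate it ---
class OrderQueryService:
    def __init__(self, store: dict):
        self._store = store

    def order_status(self, order_id: str) -> str:
        return self._store[order_id]["status"]


store = {}  # stand-in for separate write/read databases kept in sync
OrderCommandHandler(store).handle(CreateOrder("o-1", 42.0))
print(OrderQueryService(store).order_status("o-1"))  # prints "created"
```

The payoff is that each side can evolve independently: the read model can be denormalized for fast queries while the write model stays strict about invariants.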
The open-source project that caught the spotlight with a rising star count is PyApp, a wrapper for Python applications that bootstraps them at runtime. With this tool, you can build standalone binaries for every platform, with extremely configurable runtime behavior that lets you target different end users. PyApp also provides optional management commands for functionality such as self-updates. Check out the tool on its official GitHub page here, and leave a star if you like it.
Lastly, we will take a look at some of the trending scoops that hold a special mention for the community.
CSS saw a lot of development throughout 2023, and styling web pages has always been challenging. Summarizing all of it in a single article would be lengthy, but the folks at Google composed a complete blog post with interactive elements and updates, alongside previews, demos, and source code.
Repurposing videos for various platforms is more useful than ever these days; it helps grow your social media presence, and what better way to do it than with AI, which brings efficiency and creativity to the process. Here is an article from 10levelup that gives an in-depth analysis of a bunch of AI-powered video repurposing tools, uncovering their strengths and weaknesses.
If you are new to the world of observability, here is an interesting hands-on video for you by Henrik Rexed of Dynatrace, from the YouTube channel Is it Observable, that explains collecting logs with the help of OpenTelemetry. It also discusses key components and best practices with which you can enhance your cloud-native environment.
With all the hype around LLMs in recent days, it is safe to say that every developer wants to run one on their local machine. This is now possible with the help of Llamafile, a single multi-GB file that contains the model weights for an LLM along with the code needed to run that model. Here is a very short article on Llamafile from Simon Willison's blog that tells you more about this project and the commands required to download and run an LLM on your machine.
If you are new to the world of AI and ML, there is an interesting resource for you. This book on deep learning by Chris and Hugh Bishop will give you a comprehensive introduction to several key concepts and architectures in the field of machine learning and is quite handy for newcomers as well as experienced professionals.
That’s it from us for this edition. We hope you are going away with a ton of new information. Lastly, share this newsletter with your colleagues and pals if you find it valuable, and if you are reading for the first time, a subscription would be awesome.