Dev Catch Up - #7
Bringing devs up to speed on the latest dev news from the last week
Tech evolves constantly, and as usual Dev Shorts is back with another issue to bring you up to speed on what the community has been up to. Like previous editions, this one covers stories trending in our developer circles, along with new open-source projects, tutorials, conference news, and much more.
Machine learning and artificial intelligence account for a fair share of 2023's major technology developments. Large Language Models (LLMs) are increasingly popular, and applications built on them are exploding onto the scene. LLMs are effectively black boxes that produce nondeterministic outputs, so you cannot test or debug them with traditional software engineering techniques. As a result, LLM-powered applications tend to bring reliability and predictability problems into production that cause real inconvenience for end users. Observability is the answer to these problems, and you need an observability-driven development approach when building on LLMs. This article from Honeycomb explains in detail why observability-driven development is necessary with LLMs, along with its advantages.
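To make the idea concrete, here is a minimal sketch of what observability-driven development can look like in practice: wrapping each model call so that the prompt, response, latency, and token usage are emitted as one structured event you can query later. The `call_llm` stub and the field names are illustrative assumptions, not part of any real API; in production you would send the event to your observability backend instead of printing it.

```python
import json
import time
import uuid


def call_llm(prompt):
    # Stand-in for a real model call; returns a canned response so the
    # sketch is self-contained and runnable.
    return {"text": "stubbed completion", "tokens_used": 7}


def observed_llm_call(prompt, model="example-model"):
    """Wrap an LLM call and emit a structured event carrying the
    attributes you would want to query later: input, output, latency,
    and token usage."""
    start = time.monotonic()
    response = call_llm(prompt)
    event = {
        "event": "llm.call",
        "trace_id": str(uuid.uuid4()),
        "model": model,
        "prompt": prompt,
        "response": response["text"],
        "tokens_used": response["tokens_used"],
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }
    # A real system would ship this to an observability backend;
    # here we simply print the JSON event.
    print(json.dumps(event))
    return response


observed_llm_call("Summarise last week's dev news")
```

Because the output is nondeterministic, per-call events like these, rather than pass/fail tests, are what let you spot regressions in quality, latency, or cost after deployment.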
Large Language Models are among this year's hottest trends, and big tech companies are not shying away from building enterprise solutions on top of them. GitHub Copilot is a successful example of an enterprise LLM product already in action. Its journey from idea to production went through three stages before launching to the general public: first, finding an impactful problem space for an LLM application; then, crafting a smooth AI experience; and finally, getting the application ready for general availability. The entire journey is documented in this article from GitHub, where they discuss the lessons learned from building a successful LLM-based product.
Many of you have heard of data modelling, but far fewer have heard of query-driven data modelling. It is a technique in which tables and datasets are built in reaction to a single stakeholder's request; requirements gathering still happens, but it is directed at that one stakeholder. Many companies nowadays no longer perform conceptual or logical data modelling, instead developing queries to support one-off requests. In other words, without forming a star schema or building a data mart or warehouse, query-driven data modelling builds tables to support a single request. This article from SeattleDataGuy discusses what query-driven data modelling is, along with its benefits, disadvantages, and its place in the modern world of data engineering.
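As a small illustration of the pattern, here is a sketch using Python's built-in `sqlite3` module: instead of designing dimensions and facts, we create one table shaped around a single stakeholder question ("what is revenue by region?") straight from an ad-hoc query. The table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Raw application data, as it might land before any modelling (illustrative).
cur.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "EU", 80.0), (3, "US", 200.0)],
)

# Query-driven modelling: no star schema, no warehouse -- just one table
# built directly from the query that answers the stakeholder's request.
cur.execute(
    """
    CREATE TABLE revenue_by_region AS
    SELECT region, SUM(amount) AS total_revenue
    FROM orders
    GROUP BY region
    """
)

rows = cur.execute(
    "SELECT region, total_revenue FROM revenue_by_region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 200.0), ('US', 200.0)]
```

This is fast to deliver, which is the technique's appeal, but every such table serves exactly one question, which is also where the drawbacks the article discusses come from.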
In recent news, AWS launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML. The service lets customers reserve access to Nvidia GPUs for a limited amount of time, scheduling exactly the capacity they need on future dates. Pricing varies with the supply of and demand for the GPUs. More information can be found in this report from TechCrunch, which covers the service in detail along with its availability in specific AWS regions.
Now, let's head over to some news and articles of interest to developers and the wider tech community.
Good to know
In the world of SRE, whether you work with incidents regularly or are just starting out, you have probably come across the term “post-mortem”. A seasoned incident responder may have mixed feelings about it. A post-mortem document is where information about an incident is gathered after it has been closed out, with the end goal of identifying the key contributing factors and reducing or preventing the impact of similar incidents in the future. Some say post-mortems are essential for incidents of every severity, while others vouch for them only in the worst cases. Regardless, this article from incident.io explains the importance of post-mortem documents and how they spread knowledge across your organisation to build resilient products and processes.
To build a fault-tolerant system, you need to follow software design principles that serve that goal. One such principle is fail fast: a design pattern in which an application reports any exception immediately rather than continuing execution. By detecting and propagating failures right away, it aims to prevent localised faults from cascading across system components. Codereliant.io presents an excellent article on this design pattern, discussing its advantages with practical examples and best practices.
OpenTelemetry stories are a constant these days, and this edition is no exception. OpenTelemetry (OTel) has become the de facto standard for instrumentation, telemetry generation, and collection. While it is generally used to observe applications, you can also use it to observe your running infrastructure, such as a Kubernetes cluster. Reese Lee from New Relic has written an excellent article showing how to monitor a Kubernetes cluster with OpenTelemetry; it also covers the OpenTelemetry Collector components for Kubernetes and gives you an idea of how to build a data pipeline with them.
An open-source tool from PyTorch is stealing the limelight. With AI and ML trending, the project has garnered plenty of attention and GitHub stars. ExecuTorch, part of the PyTorch Edge ecosystem, is an end-to-end solution for on-device inference on mobile and edge devices, including wearables, embedded devices, and microcontrollers. It deploys PyTorch models to edge devices efficiently and, while supporting a plethora of computing platforms, can fully utilise hardware capabilities such as CPUs, NPUs, and DSPs. You can check it out on its official GitHub page here and leave a star.
Lastly, let's take a look at some trending scoops that deserve a special mention in the community.
OpenTelemetry doubts and best practices are discussed at length in this hour-long Q&A session hosted by Hazel Weakly, covering how to implement and optimise OpenTelemetry practices within an organisation.
If you are familiar with cloud computing, or the cloud in general, you have probably heard of object stores: a data storage strategy that keeps data as objects. This short article from ByteByteGo discusses six use cases for object stores.
At the latest P99 CONF, several speakers openly called out the problems with P99 latencies, challenging their value as a metric and discussing the problems alongside alternatives. Here is a detailed article from The New Stack on the issue.
OpenAI has taken the world by storm with its conversational generative AI assistant, but the company is more vulnerable than you might think. Alex Kantrowitz from Big Technology sheds light on OpenAI's vulnerabilities in detail in this article.
API testing verifies that an API works correctly and efficiently, which makes it quite useful to software engineers. Here is a short article from ByteByteGo that explains the nine types of API testing in layman's terms.
That's it from us for this edition; we hope you are going away with a ton of new information. If you found it valuable, share this newsletter with your colleagues and pals, and if you are reading for the first time, a subscription would be awesome.