[Dev Catch up #9] - After effects of OpenAI Saga, Dev Day and other updates
Bringing devs up to speed on the latest dev news from the last week
I think we just had one of the most exciting weeks in tech, as Sam Altman, the person who led the recent AI-industry revolution, was ousted from his leadership position. I want to write a very detailed opinion on the entire story, but for now, here is one amazing article that illustrates the complexity of the situation!
The after-effects of the OpenAI saga will be drastic, as people are unsure about customer support and the future commercial roadmap. For good or bad, GPT is the god-model that most companies use, including the likes of Salesforce.
Over to this week’s update.
Must Reads
OpenAI concluded their first annual developer conference two weeks ago, and it was filled with announcements of new models and developer products. Among dozens of additions and improvements, the most prominent is the launch of the GPT-4 Turbo model. It is cheaper and more capable than previous models, with support for a 128k context window. With this release, pricing has been reduced across many parts of OpenAI's platform. Apart from this, OpenAI unveiled their Assistants API, which makes it easy for developers to build assistive AI apps that have goals and can call models and tools. New multimodal capabilities were also introduced in the OpenAI platform, including vision, image creation with DALL·E 3, and text-to-speech (TTS). Get more detailed insights on these launches from this article posted by OpenAI, where the new models and developer products are explained in depth.
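To make the announcement concrete, here is a minimal sketch of what a request to the new model can look like over plain HTTP. The model identifier `gpt-4-1106-preview` is the preview name announced at Dev Day and may change, so treat it as an assumption and check OpenAI's documentation before relying on it.

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str, model: str = "gpt-4-1106-preview") -> dict:
    # Build a Chat Completions request body for the GPT-4 Turbo preview model.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def call_openai(payload: dict, api_key: str) -> dict:
    # Plain-stdlib POST to the Chat Completions endpoint.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Summarise this week's dev news in one line.")
# The network call only runs when a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    print(call_openai(payload, os.environ["OPENAI_API_KEY"]))
```

The same request shape works for the multimodal models as well; only the model name and message content change.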
Recent years have brought a revolution in the world of ML and AI, and developers are more interested than ever in using these technologies to build tools and products. Large Language Models, or LLMs, are AI models trained on huge datasets of text and code, and developers are eyeing them to build enterprise tools and applications. But figuring out the architecture, along with the steps to create your first LLM application, is quite challenging. This article from GitHub explains everything a developer needs to know when creating their first application with an LLM. Developers can explore the problem spaces discussed in the article along with the emerging architecture of LLM applications. The article also covers the real-world impact of LLMs.
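The emerging architecture of LLM apps usually starts with retrieval plus prompt assembly. Here is a toy, hypothetical sketch of that first step; the keyword-overlap scoring and prompt template are illustrative inventions, not taken from the GitHub article.

```python
# Toy retrieval-augmented pipeline: pick the most relevant document by
# naive keyword overlap, then assemble a prompt to send to an LLM.
def score(query: str, doc: str) -> int:
    # Relevance stand-in: count shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, docs: list[str]) -> str:
    best = max(docs, key=lambda d: score(query, d))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Kubernetes schedules containers across a cluster of nodes.",
    "OpenTelemetry collects traces, metrics and logs from services.",
]
prompt = build_prompt("How does OpenTelemetry collect traces?", docs)
print(prompt)
```

Real applications replace the scoring function with embedding search over a vector store, but the overall shape — retrieve, assemble, call the model — is the same.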
Recently, Elastic decided to extend the Elastic AI Assistant so that it can provide context-aware insights for observability. The Elastic AI Assistant is an open, generative AI assistant powered by the Elasticsearch Relevance Engine. By transforming problem identification and resolution and eliminating the need for manual data chasing across silos, the AI Assistant will now enhance the understanding of application errors, log message interpretation, alert analysis, and suggestions for optimal code efficiency. Additionally, it has an interactive chat interface that allows SREs to chat and visualise all relevant telemetry cohesively in one place. It also helps leverage proprietary data and runbooks for additional context. You can configure the LLM of your choice with the AI Assistant and provide private information to it. Learn more about the AI Assistant from this explanatory article posted by Elastic, where the features are explained in detail along with demo videos and graphics.
Most programmers are familiar with CPUs and sequential programming because they have grown up writing code for CPUs, but not many programmers know about GPU computing. Engineers nowadays should have a basic understanding of GPUs because of their extensive use in deep learning. At the cost of medium-to-high instruction latency, GPUs are designed for massive levels of parallelism and high throughput. Here is an article from Abhinav Upadhyay, posted on Code Confessions, that delves into the inner workings of a GPU, how it operates at a detailed level, and how it differs from a CPU.
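As a rough CPU-side analogy of that point: a GPU kernel is one small function applied to every element of the data at once, trading per-operation latency for throughput. This sketch uses a thread pool purely to illustrate the programming model (not the performance — a real GPU launches thousands of hardware threads):

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(x: float) -> float:
    # In GPU terms, this is the "kernel" each thread runs on its own element.
    return x * x + 1.0

data = [0.0, 1.0, 2.0, 3.0]
# Each element is processed independently, in whatever order the workers
# pick them up; map() still returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(kernel, data))
print(result)
```

The key mental shift the article explains is exactly this: instead of one loop iterating over the data, you write the per-element function and let the hardware supply the parallelism.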
If you have a business and you are planning to operate your workloads in multiple regions of the globe, you will face a whole set of challenges. Orchestrating deployments between regions is difficult because a new version of your application might deploy correctly in one region while failing in others. Securing traffic between your components, managing the networking layer so that your components communicate seamlessly, and keeping a distributed system available at low latency during outages are some of the problems you can face. All of these complexities multiply the failure scenarios you have to consider. The Koyeb team has put together an article where they dive deep into the architecture of a serverless engine and a multi-region networking layer, and talk about how they built their global deployment engine.
Now, let's head over to some news and articles that will be of interest to developers and the wider tech community.
Good to know
In software development, memory leaks are a common and frustrating problem. A memory leak is a type of memory consumption where a program fails to release memory it no longer needs. The phenomenon arises when an application stops using objects in memory after a certain time but fails to return the allocated memory to the operating system or the memory pool. This leads to unintended consequences like increased memory usage and paging, out-of-memory errors, and application instability, which in turn degrade system performance through sluggish behaviour, crashing, or freezing. Learn more about memory leaks from this article by Codereliant.io, which gives a detailed explanation of memory leaks and their causes.
As software development evolves, automation is becoming a big part of modern-day development. Collecting data from websites is a tedious task, but with the help of automation it gets done smoothly. The process is called web scraping: a way to collect publicly available web data with code. The execution is done programmatically through scripts that automate the entire data-collection process, and the stored data can be used for later analysis. This detailed article from Technically, written by Justin, talks about web scraping and its significance.
Observability with OpenTelemetry is the newfound revolution and is consistently trending, and testing OpenTelemetry-instrumented applications is yielding better results. Tracetest is a trace-based testing tool built on OpenTelemetry: using distributed traces, it verifies that your application behaves according to your test specs. Here is an article from Tracetest that shows the seamless integration of Tracetest with a Node.js application and how using OpenTelemetry traces enhances end-to-end and integration testing.
An open-source project that is catching eyes and stars on GitHub is OpenAPI DevTools. It is a Chrome extension that generates OpenAPI specifications in real time from network requests. It adds a new tab to Chrome DevTools that automatically converts network requests into a specification, so you can instantly generate an OpenAPI 3.1 specification for any website or application. You can check out the project from its GitHub repository here and leave a star if you like it.
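For reference, what the extension produces is a standard OpenAPI 3.1 document. Here is a minimal hand-written example of that shape, built as a Python dict; the `/articles` path and its response schema are invented for illustration, not output from the tool.

```python
import json

# A minimal OpenAPI 3.1 document describing one captured GET endpoint.
spec = {
    "openapi": "3.1.0",
    "info": {"title": "Captured API", "version": "1.0.0"},
    "paths": {
        "/articles": {
            "get": {
                "summary": "List articles",
                "responses": {
                    "200": {
                        "description": "OK",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"type": "object"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

# Serialise to JSON, the format tools exchange these specs in.
document = json.dumps(spec, indent=2)
print(document.splitlines()[1])  # the "openapi" version line
```

Having the spec as data like this is what makes downstream tooling — client generation, contract testing, documentation — possible.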
Lastly, we will take a look at some of the trending scoops that hold a special mention for the community.
Notable FYIs
OpenTelemetry stole the spotlight at QCon with excellent sessions from prominent speakers in the industry. Here is a must-watch session from Daniel Gomez Blanco, Principal Engineer at Skyscanner, where he shares his experience leading a large-scale observability initiative at Skyscanner and adopting OpenTelemetry across hundreds of services, leading to effective and efficient observability.
Applications primarily expose their functionality to the outside world by means of APIs. Hence, protecting an API from being modified or destroyed through unauthorised access is one of a developer's prime responsibilities. Here are 12 tips from ByteByteGo on how you can protect your API.
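Rate limiting, a staple of such tip lists, can be sketched with a token bucket. This is a generic illustration rather than something taken from the ByteByteGo list: each client gets a burst budget of `capacity` requests, refilled at `rate` tokens per second.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter for one client."""

    def __init__(self, capacity: int, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate            # tokens refilled per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 requests allowed, the rest throttled
```

In production you would keep one bucket per API key (often in Redis so all instances share state), and pair it with authentication, input validation, and the other tips on the list.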
Joshua Browder, CEO of DoNotPay, is helping consumers cancel their gym memberships and dispute charges through his company, in an effort to stand up against big corporations. He believes that in the future AIs will be fighting other AIs, and stresses that companies nowadays use AI-enabled chatbots to handle customer success; to tackle them, consumers need to be armed with powerful AI tools of their own. Learn more about his perspective from this podcast hosted by Eric Newcomer of Newcomer.co.
Releasing an application is a lengthy process, and releasing a mobile application is somewhat different from releasing a web or enterprise application. This short article from ByteByteGo paints a clear picture with a schematic diagram explaining, step by step, how a mobile application gets released.
OpenTelemetry brought a revolution to the world of observability. The OTel maintainers are working hard on shipping regular updates, and here is a detailed article explaining the latest updates from the core OpenTelemetry repositories.
That’s it from us in this edition. We hope you are going away with a ton of new information. Lastly, share this newsletter with your colleagues and pals if you find it valuable, and if you are reading for the first time, a subscription would be awesome.