Django 5.0: What's New?
Django is an open-source Python web framework. It makes web development fast and straightforward through its collection of modules. Since its initial release in 2005, the framework has come a long way, and it grows more robust with every update. Let's look at the new features and updates in Django 5.0.

Significant Updates in Django 5.0

Released on 4 December 2023, Django 5.0 introduces numerous updates to enhance the web development experience. Some of the primary improvements are described below.

Straightforward Rendering of Form Fields

One of the notable improvements in Django 5.0 is that form fields are now easy to render. Form fields in Django have numerous elements, such as descriptive labels, help text, and error labels, and laying them all out manually was always tiresome. Thankfully, in this new version you no longer need to bother with that. Django 5.0 introduces field group templates, which simplify rendering all of a form field's components: widgets, help text, labels, errors, and more.

Earlier:

<form>
…
<div>
  {{ form.name.label_tag }}
  {% if form.name.help_text %}
    <div class="helptext" id="{{ form.name.auto_id }}_helptext">
      {{ form.name.help_text|safe }}
    </div>
  {% endif %}
  {{ form.name.errors }}
  {{ form.name }}
  <div class="row">
    <div class="col">
      {{ form.email.label_tag }}
      {% if form.email.help_text %}
        <div class="helptext" id="{{ form.email.auto_id }}_helptext">
          {{ form.email.help_text|safe }}
        </div>
      {% endif %}
      {{ form.email.errors }}
      {{ form.email }}
    </div>
    <div class="col">
      {{ form.password.label_tag }}
      {% if form.password.help_text %}
        <div class="helptext" id="{{ form.password.auto_id }}_helptext">
          {{ form.password.help_text|safe }}
        </div>
      {% endif %}
      {{ form.password.errors }}
      {{ form.password }}
    </div>
  </div>
</div>
…
</form>

Now:

<form>
…
<div>
  {{ form.name.as_field_group }}
  <div class="row">
    <div class="col">{{ form.email.as_field_group }}</div>
    <div class="col">{{ form.password.as_field_group }}</div>
  </div>
</div>
…
</form>

Database Generated Model Field

The database-generated model field is another prominent update in Django 5.0. The new GeneratedField lets users create database-generated columns, and all of Django's built-in database backends support it. It is especially useful for fields computed from other fields. For example:

from django.db import models
from django.db.models import F

class Square(models.Model):
    side = models.IntegerField()
    area = models.GeneratedField(
        expression=F("side") * F("side"),
        output_field=models.BigIntegerField(),
        db_persist=True,
    )

Because the computation happens in the database itself, this feature can improve efficiency and helps maintain data integrity.

Python Compatibility

Django keeps pace with the ever-evolving Python language. Django 5.0 supports Python 3.10, 3.11, and 3.12, so developers can take advantage of the latest Python features and improvements. This not only ensures the best performance but also improves security.

Facet Filters in the Admin

Django 5.0 adds facet counts for applied filters on the admin change list. Developers can toggle this feature from the UI. It improves the admin interface by presenting facet counts alongside filters, giving users quick insight into the distribution of their data.

Write Field Choices Easily

In earlier versions of Django, listing field choices was cumbersome.
Users had to write out an awkward arrangement of 2-tuples or Enumeration subclasses to declare the choices available to Field.choices and ChoiceField.choices. See the following example:

HQ_LOCATIONS = [
    ("United States", [("nyc", "New York"), ("la", "Los Angeles")]),
    ("Japan", [("tokyo", "Tokyo"), ("osaka", "Osaka")]),
    ("virtual", "Anywhere"),
]

In Django 5.0 you can use a more concise declaration based on dictionary mappings:

HQ_LOCATIONS = {
    "United States": {"nyc": "New York", "la": "Los Angeles"},
    "Japan": {"tokyo": "Tokyo", "osaka": "Osaka"},
    "virtual": "Anywhere",
}

This makes choices much simpler to encode as literals.

AsyncClient

Django 5.0 adds more asynchronous methods to both Client and AsyncClient, supporting asynchronous testing of Django applications. Users can now write tests that replicate the asynchronous behavior of their application.

Database-Computed Default Values

Django 5.0 lets you define database-computed default values, giving you more powerful and accurate defaults. The new Field.db_default parameter enables users to set database-computed default values for model fields, which is especially helpful for timestamps or calculated fields (see the sketch at the end of this article). Although it is a small change, it can have a substantial impact on the integrity of your data, since default values can be defined using database functions.

Features Deprecated in 5.0

Django 5.0 also removes a few old features, so you should check whether your code relies on any of them; if it does, you will need to update it accordingly. These features were deprecated in previous versions. Some notable changes include:

- The SERIALIZE test database setting is no longer available.
- The undocumented django.utils.baseconv module has been removed.
- The undocumented django.utils.datetime_safe module can no longer be used.
- The USE_TZ setting now defaults to True; earlier, it defaulted to False.

Conclusion

Django 5.0 introduces numerous updates and features that take web development to the next level. The framework has solidified its position as a powerful and versatile choice and has become a crucial tool for building websites and web applications. Enhanced flexibility in declaring field choices, improved performance, and numerous security features make it one of the best Python web frameworks. I've been working with Django since version 0.96 (2007), so if you need help with it, Contact Now.
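As a closing example, here is a minimal sketch of the Field.db_default parameter discussed above. The model and field names are illustrative, not taken from the Django documentation:

from django.db import models
from django.db.models.functions import Now

class Order(models.Model):
    # The database computes this default, not Python at save time.
    created_at = models.DateTimeField(db_default=Now())
    # Plain literals work too; they are compiled into the column's DEFAULT clause.
    status = models.CharField(max_length=20, db_default="pending")

Because the default lives in the database, rows inserted outside Django (for example, by a raw SQL script) receive the same value, which is exactly what makes this feature good for data integrity.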
Parcel Bundler: The Ultimate Guide for Beginners
The web development landscape is continuously progressing. Today, it has become easier to optimize the performance and efficiency of a web project, thanks to bundling tools. These platforms boost productivity and save you the headache of setting up and configuring different web tools. While numerous bundling tools have emerged recently, one renowned option is Parcel Bundler. This post explores the different features of Parcel Bundler, but before we get to its features, let's learn more about the tool itself.

Parcel Bundler Overview

Parcel Bundler is an advanced tool that helps web developers bundle web resources. It supports zero configuration, meaning it does not need a configuration file to bundle web applications. Parcel Bundler is an open-source tool that supports various languages and file types. It can combine multiple files into a single file, bundling assets like HTML, CSS, and JavaScript into a format optimized for the web. Furthermore, it lets you optimize your code and prepare web projects for deployment. Some well-known features of Parcel Bundler are described below.

Features of Parcel Bundler

Zero-Config Module Bundler

Parcel Bundler supports a zero-config setup: developers can bundle their web applications without configuring the bundling process, eliminating the need to write and interpret configuration files.

Hot Module Replacement

HMR, or Hot Module Replacement, is an advanced feature of Parcel Bundler. It lets developers update their code in real time without reloading the full page. As web developers change their code, Parcel rebuilds the changed files and updates the application in the browser. Parcel's HMR swaps modules in the browser at runtime without refreshing the entire page, so developers retain their application state while making small changes to the code.

Bundling

Parcel Bundler enables users to keep all their project files together. It can bundle JavaScript, CSS, and other files together, and because Parcel automatically examines your project's dependencies, it produces optimized bundles accordingly.

File Compression

Parcel Bundler performs a wide range of optimizations when creating the production build, and file compression is one of them. Among other things, the bundler reduces file size by shortening variable names.

Code Minification

Parcel Bundler has a built-in feature for code minification. It eliminates unnecessary characters, such as spaces and comments, from your code without affecting its functionality. Minification improves the performance of your web application by reducing overall loading time. It runs automatically when you build your project in production mode with the Parcel build command:

parcel build index.html

The command tells Parcel to bundle the project starting from the entry point specified in index.html.

Image Optimization

Parcel Bundler also excels at image optimization. It reduces the size of images without visibly affecting their quality, so websites and applications load faster. Parcel optimizes images in several ways: it adjusts the compression settings of PNG and JPEG files, it may convert images to a different format, and it can resize image dimensions.

Development Caching

Parcel Bundler caches certain resources during development to avoid reprocessing those files while you make changes. It speeds up the build by updating and recompiling only the parts of a web application that have changed.
Development caching is an exceptionally helpful feature for large projects.

Code Cleanup

Parcel comes with a built-in way to eliminate leftover debug statements. While building a website or application, we leave notes for ourselves, for instance console.log calls in the code. Parcel can strip such statements from production code automatically, leaving your codebase neat and clean.

Tree Shaking

Tree shaking is another crucial feature of Parcel Bundler. It removes unused code, known as dead code, from the final bundle. The term "tree shaking" comes from the idea of shaking a tree to get rid of dead leaves. Tree shaking automatically identifies unused code and removes it. It works best with ES6 module syntax (import/export), because static imports and exports can be analyzed without running the code, making it easy to determine what is unused. To eliminate dead code, tree shaking analyzes the whole dependency tree from the application's entry point, traces which functions, variables, and imports are actually used, and removes the rest during bundling.

Browser Compatibility

Parcel Bundler provides a smooth development experience thanks to its browser compatibility support. The tool helps ensure your output works across different browsers. Parcel integrates with Babel and transpiles modern JavaScript (ES6+ syntax) into a backward-compatible version, so the result runs in a diverse range of browsers, including older browsers that do not support modern JavaScript features.

Installation of Parcel

If you have Node.js and npm installed, you can install Parcel Bundler with the following command:

// Installing Parcel Bundler globally
npm install -g parcel-bundler

(Note that parcel-bundler is the Parcel 1 package; for Parcel 2, the package is simply parcel.)

Installing Parcel globally lets you use the parcel command in any project folder.

Conclusion

Parcel Bundler is a trustworthy and efficient tool for bundling web applications. Features like zero configuration, caching, and tree shaking give it an edge over its competitors. Whether you are a beginner or an experienced web developer, you can leverage this technology to improve your productivity and efficiency. So what are you waiting for? Boost your web development workflow with this excellent bundling tool.
Ceph Persistent Storage for Kubernetes with Cephfs
Kubernetes is a prominent open-source orchestration platform. Individuals use it to deploy, manage, and scale applications. It is often challenging to manage stateful applications on this platform, especially those with heavy databases. Ceph, a robust distributed storage system, comes to the rescue. This open-source storage platform is known for its reliability, performance, and scalability. This blog post shows you, step by step, how to use Ceph persistent storage for Kubernetes with Cephfs.

Before we jump into the steps, you must have an external Ceph cluster. We assume you have a Ceph storage cluster deployed with Ceph Deploy or manually.

Step 1: Deployment of Cephfs Provisioner on Kubernetes

Deployment of the Cephfs provisioner on Kubernetes is a straightforward process. Simply log into your Kubernetes cluster and create a manifest file to deploy the provisioner. It is an external dynamic provisioner that is compatible with Kubernetes 1.5+.

vim cephfs-provisioner.yml

Include the following content in the file. Remember, our deployment relies on RBAC (Role-Based Access Control), so we establish the cluster role and bindings before creating the service account and deploying the Cephfs provisioner.

---
kind: Namespace
apiVersion: v1
metadata:
  name: cephfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
  namespace: cephfs
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: cephfs
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: cephfs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "quay.io/external_storage/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
            - name: PROVISIONER_SECRET_NAMESPACE
              value: cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner

Next, apply the manifest.
$ kubectl apply -f cephfs-provisioner.yml
namespace/cephfs created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
serviceaccount/cephfs-provisioner created
deployment.apps/cephfs-provisioner created

Make sure that the Cephfs volume provisioner pod is running.

$ kubectl get pods -l app=cephfs-provisioner -n cephfs
NAME                                  READY   STATUS    RESTARTS   AGE
cephfs-provisioner-7b77478cb8-7nnxs   1/1     Running   0          84s

Step 2: Obtain the Ceph Admin Key and Create a Secret on Kubernetes

Access your Ceph cluster and retrieve the admin key to be used by the Cephfs provisioner.

sudo ceph auth get-key client.admin

Save the value of the admin user key printed by the command above; we will add it as a secret in Kubernetes.

kubectl create secret generic ceph-admin-secret \
  --from-literal=key='<key-value>' \
  --namespace=cephfs

Where <key-value> is your Ceph admin key. Verify the creation with the following command.

$ kubectl get secrets ceph-admin-secret -n cephfs
NAME                TYPE     DATA   AGE
ceph-admin-secret   Opaque   1      6s

Step 3: Make Ceph Pools for Kubernetes and Client Key

To run a Ceph file system, you need at least two RADOS pools, one for data and another for metadata. The metadata pool usually contains only a few gigabytes of data, so a small placement group (PG) count is recommended; 64 or 128 PGs are commonly used even for large clusters.

Now let us make the Ceph OSD pools for Kubernetes:

sudo ceph osd pool create cephfs_data 128 128
sudo ceph osd pool create cephfs_metadata 64 64

Create a Ceph file system on the pools.

sudo ceph fs new cephfs cephfs_metadata cephfs_data

Confirm the Ceph file system creation.

$ sudo ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

(You can also confirm the file system from the Ceph dashboard UI.)

Step 4: Make a Cephfs StorageClass on Kubernetes

A StorageClass is a way to describe the "classes" of storage you offer in Kubernetes. Let's create a storage class called cephfs.

vim cephfs-sc.yml

Add the following content to the file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
  namespace: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes

Where:

- cephfs is the name of the StorageClass to be created.
- 10.10.10.11, 10.10.10.12, and 10.10.10.13 are the IP addresses of the Ceph monitors. You can list them with the command:

$ sudo ceph -s
  cluster:
    id:     7795990b-7c8c-43f4-b648-d284ef2a0aba
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h)
    mgr: cephmon01(active, since 30h), standbys: cephmon02
    mds: cephfs:1 {0=cephmon01=up:active} 1 up:standby
    osd: 9 osds: 9 up (since 32h), 9 in (since 32h)
    rgw: 3 daemons active (cephmon01, cephmon02, cephmon03)

  data:
    pools:   8 pools, 618 pgs
    objects: 250 objects, 76 KiB
    usage:   9.6 GiB used, 2.6 TiB / 2.6 TiB avail
    pgs:     618 active+clean

Once you have updated the file with the correct Ceph monitor values, run the kubectl command to create the StorageClass.
$ kubectl apply -f cephfs-sc.yml
storageclass.storage.k8s.io/cephfs created

Next, list all the available storage classes:

$ kubectl get sc
NAME       PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   ceph.com/rbd      Delete          Immediate           false                  25h
cephfs     ceph.com/cephfs   Delete          Immediate           false                  2m23s

Step 5: Do Testing and Create a Pod

Create a test persistent volume claim to make sure everything works.

$ vim cephfs-claim.yml

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi

Apply the manifest file.

$ kubectl apply -f cephfs-claim.yml
persistentvolumeclaim/cephfs-claim1 created

A successful binding shows the Bound status.

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-claim1   Bound    pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304   1Gi        RWO            ceph-rbd       25h
cephfs-claim1     Bound    pvc-1bfa81b6-2c0b-47fa-9656-92dc52f69c52   1Gi        RWO            cephfs         87s

Next, we can launch a test pod using the claim we made. First, create a manifest file for the pod:

vim cephfs-test-pod.yaml

Add content along the following lines: a minimal test pod that mounts the claim and writes a file to it.

kind: Pod
apiVersion: v1
metadata:
  name: cephfs-test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command: ["/bin/sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cephfs-claim1

Apply it with kubectl apply -f cephfs-test-pod.yaml and confirm the pod reaches the Running state, which proves the Cephfs-backed volume mounts correctly.
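If you prefer scripting this workflow instead of applying YAML by hand, the official Kubernetes Python client can create the same kind of claim. Below is a minimal sketch, assuming a working kubeconfig and the cephfs StorageClass created above; the claim name is illustrative:

# pip install kubernetes
from kubernetes import client, config

# Load credentials from ~/.kube/config (use config.load_incluster_config() inside a pod).
config.load_kube_config()
core = client.CoreV1Api()

# Describe a 1Gi claim against the "cephfs" StorageClass from Step 4.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="cephfs-claim2"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="cephfs",
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# Read the claim back once; in practice you would poll until it reports "Bound".
status = core.read_namespaced_persistent_volume_claim("cephfs-claim2", "default").status
print(status.phase)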
Types of NoSQL Databases: Everything You Need to Know About Them
NoSQL, or "Not Only SQL", refers to a class of database management systems (DBMS) built to manage large volumes of unstructured or semi-structured data. Because it removes various limitations of conventional relational databases, NoSQL has become popular; Google, Facebook, Amazon, and Netflix are some of the reputable companies that use it. This blog introduces the different types of NoSQL databases and their features. Before we move further, let's find out how NoSQL differs from SQL.

SQL vs. NoSQL Databases: Quick Comparison

Type

SQL databases are relational databases, while NoSQL databases are known as non-relational databases.

Language of Query

SQL databases use Structured Query Language for jobs like DELETE, SELECT, UPDATE, and INSERT. NoSQL databases, on the other hand, each have their own query language, framework, or API for manipulating data, depending on the type of database.

Expandability

Traditional SQL databases are vertically scalable: you improve their performance by upgrading the hardware. NoSQL databases, on the contrary, are horizontally scalable from the ground up, which makes them better at handling large amounts of data and traffic.

Property Followed

SQL databases follow ACID (Atomicity, Consistency, Isolation, and Durability) transactions to manage data integrity. NoSQL databases are designed around the CAP theorem (Consistency, Availability, and Partition Tolerance).

Types of NoSQL Databases

We can categorize NoSQL databases into the following four types. Each has its pros and limitations, and you can choose among them based on your requirements. Let us look at them in detail.

Key-Value Pair Database

The key-value pair database is one of the simplest types of NoSQL databases. It is a non-relational database that stores data elements as key-value pairs and can handle heavy loads of data. It stores data as a hash map with two columns, the key and the value. Each key is unique, while the value can be a string, a Binary Large Object, or JavaScript Object Notation. The three major strengths of the key-value pair database are speed, simplicity, and scalability. Generally, this type of database is used for dictionaries, user profiles, user preferences, and the like.

Graph-Based Database

The graph-based database lets users store entities and the relations between those entities. Commonly, this database stores data for social networking websites, fraud detection systems, healthcare networks, and more. A graph-based database stores each piece of data as a node; the connections between nodes are known as edges, and every node and edge has a unique identifier. The database lets users find relationships between data items by following these links. Unlike relational databases, graph-based databases are natively multi-relational. A few well-known graph-based databases are FlockDB, Neo4j, and InfiniteGraph. All in all, we can say that a graph-based database stores, manages, and queries data as a graph structure.

Column-Oriented Database

The column-oriented database is a non-relational database that stores data by column rather than by row and reads it column by column. It is like a collection of the columns we see in a table, with each column storing one type of information. The database reads and retrieves data at high speed: you can run analytics on a limited number of columns and read just those columns without consuming memory on unwanted data. Column-oriented databases perform aggregate queries like COUNT, SUM, AVG, and MIN quite quickly.
Therefore, this kind of database is used for analytics and reporting, data warehousing, and library card catalogs.

Document-Oriented Database

The document-oriented database is one of the prominent types of NoSQL databases. It stores and manages data the way we organize documents in the real world. Although the data is stored and retrieved as key-value pairs, the value is a document, encoded as JSON, XML, or BSON. Users can store and retrieve documents in a form close to the data objects used in their applications, so negligible translation is needed to access and use the data in an application. Document-oriented databases offer flexible schemas, scalability, and quick retrieval; MongoDB and Couchbase are two fine examples. This type of database is used in CMS (Content Management Systems), e-commerce websites, gaming applications, collaboration tools, and more.

So these are the four types of NoSQL databases. Let's find out why this kind of database system is getting popular.

Features of NoSQL

NoSQL has several advances over traditional databases. We have listed a few significant ones.

Compatible with Multiple Data Models

Unlike relational databases, NoSQL is not strict: it can handle multiple data models and can manage structured, semi-structured, and unstructured data with the same speed.

Schema Flexibility

Unlike conventional database systems, NoSQL databases do not require a fixed schema; they support relaxed schemas. NoSQL can manage different data formats and structures, and because there is no strict predefined schema, it permits changes to the data model.

Scalable

As mentioned above, NoSQL databases are scalable. Users can scale them horizontally by adding more nodes and servers, which makes them suitable for websites and web applications with continuously growing data.

Excellent Uptime

NoSQL databases have excellent uptime. They run on distributed architectures and keep multiple copies of data on various nodes, so businesses can manage their databases smoothly with minimal downtime. If one node fails, another takes its place and serves the data copy.

Examples of NoSQL

Now you know the different types of NoSQL databases and their uses. Below are some examples.

Document Database

MongoDB is a well-known document-oriented database. It stores data in JSON-like documents and is popular for its scalability and flexibility.

Column Database

Apache Cassandra is a well-known column-based database system that handles large amounts of data across different commodity servers.

Graph Database

Amazon Neptune is a managed graph database service by AWS. It works with both RDF graph and property graph models.

Key-Value Database

Amazon DynamoDB is a database service from Amazon Web Services that provides high uptime and low-latency key-value storage; it is the epitome of a key-value database.

Conclusion

The various types of NoSQL databases are a crucial part of modern data infrastructure. Pick the type whose data model, scaling behavior, and query patterns best match your application.
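To make the key-value and document models above concrete, here is a minimal Python sketch using the redis and pymongo client libraries. It assumes local Redis and MongoDB servers on their default ports; the keys, database, and collection names are illustrative:

# pip install redis pymongo
import redis
from pymongo import MongoClient

# Key-value model: look up a value by its unique key (e.g., a user preference).
kv = redis.Redis(host="localhost", port=6379, decode_responses=True)
kv.set("user:42:theme", "dark")
print(kv.get("user:42:theme"))  # -> dark

# Document model: store a JSON-like document and query it by a field.
mongo = MongoClient("mongodb://localhost:27017")
products = mongo["shop"]["products"]
products.insert_one({"sku": "A-100", "name": "Desk Lamp", "tags": ["home", "lighting"]})
print(products.find_one({"sku": "A-100"})["name"])  # -> Desk Lamp

The contrast shows why each type fits its use case: the key-value store only ever retrieves by key, while the document store can query inside the stored structure.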
Keras Core 3.0 — Pioneering the Next Frontier in Deep Learning APIs
In the dynamic landscape of artificial intelligence, where breakthroughs occur in rapid succession and the boundaries of what's possible are constantly pushed, the Keras framework has emerged as a steadfast companion for machine learning practitioners and researchers. With the advent of Keras Core 3.0, the framework embarks on a transformative journey, poised to redefine its capabilities, performance, and adaptability, and to solidify its position as a trailblazer in the realm of deep learning. This article delves into the evolution of Keras, highlights the remarkable features of version 3.0, and explores its compatibility with various backends.

Understanding Keras — A Journey from Inception to Innovation

Keras, born from the visionary mind of François Chollet in 2015, swiftly rose to prominence as a high-level neural networks API known for its intuitive design and unparalleled agility for experimentation. Its initial incarnation and subsequent integration with TensorFlow marked a pivotal moment, propelling Keras into the limelight of machine learning tools. As the AI landscape evolved, Keras adapted in tandem, shaping itself to meet the diverse demands of an ever-expanding user community. Now, with the unveiling of Keras Core 3.0, this evolutionary saga culminates in a set of enhancements that not only elevate the framework's capabilities but also redefine its role as an indispensable asset in the arsenal of AI practitioners.

Redefining Possibilities — Unveiling Keras 3.0's Game-Changing Features

Embracing the Multi-Backend Landscape

Keras 3.0 emerges as a trailblazer with its unprecedented support for multiple backends. While its roots are anchored in TensorFlow, this version casts a wider net, inviting frameworks like JAX and PyTorch into its fold. The result? A harmonious coexistence that empowers researchers and practitioners to work in their preferred framework without renouncing the power of Keras.

Precision Perfected — Advanced Performance Optimization

Keras Core 3.0 doubles down on performance optimization, weaving techniques like mixed-precision training and distributed training into its fabric. The result is a faster training process and better utilization of hardware resources. These optimization strategies work behind the scenes, letting users focus on the art of model development and experimentation, confident that the framework is orchestrating the complexity beneath.

Expanding the Horizons — A Flourishing Ecosystem

The Keras ecosystem flourishes with renewed vigour in Keras 3.0. The framework's enhanced support for KerasCV and KerasNLP, specialized libraries tailored for computer vision and natural language processing, empowers it to excel in these domains. This synergy doesn't just streamline the development process; it equips users with an extensive toolkit to tackle the intricate challenges inherent in these fields.

Uniting the Diverse — Cross-Framework Compatibility

Keras Core 3.0 ushers in an era of harmony across deep learning frameworks. Models crafted in Keras can move between TensorFlow, JAX, and PyTorch backends, reflecting unification in an ecosystem historically divided. This compatibility erases barriers, fostering an environment of collaboration and experimentation where diverse tools coalesce to drive innovation.

Evolution by Design — The Philosophy of Progressive Disclosure

Keras 3.0 embodies the ethos of progressive disclosure, catering to both novices and seasoned practitioners.
The API unfolds in a manner that gently onboards newcomers while gradually revealing the advanced features craved by experts. This balanced approach keeps Keras accessible and indispensable, irrespective of users' proficiency levels.

A Stateless Symphony of Design — The Stateless API Paradigm

The introduction of the stateless API marks a paradigm shift in Keras 3.0. Aligned with the trend of integrating functional programming concepts into deep learning, this design choice fosters modular architecture, encourages code reusability, and champions clean code organization. This leap not only elevates the development experience but also strengthens code maintenance and collaboration.

Navigating the Possibilities — Keras for TensorFlow, JAX, and PyTorch

Embarking on the Voyage: Installation

Getting started with Keras Core 3.0 is an effortless endeavour. Installation guides for each supported backend are available in the official documentation, giving users the freedom to pick the backend that suits their preferences and project requirements. This adaptability cements Keras as an indispensable tool amid the ever-shifting currents of AI technology. To install the package and import it:

$ pip install keras-core

import keras_core as keras

Aligning with the Core: Backend Configuration

Configuring the backend is a seamless ritual, often requiring no more than an environment variable. This configuration determines the engine propelling Keras, be it TensorFlow, JAX, or PyTorch, and empowers users to fluidly switch between backends for efficient exploration and experimentation. Run the following for backend configuration:

$ export KERAS_BACKEND="jax"
$ python train.py

Or:

$ KERAS_BACKEND=jax python train.py

Mastery in Action: Integrating KerasCV and KerasNLP

The integration of KerasCV and KerasNLP into Keras Core 3.0 paints a transformative landscape. KerasCV provides dedicated APIs and pre-built models for computer vision tasks such as image classification, object detection, and segmentation. Meanwhile, KerasNLP empowers users to tackle the challenges of natural language processing with access to cutting-edge language models, tokenization tools, and sequence manipulation layers. Here is a KerasCV usage example, with the imports the snippet relies on spelled out:

import numpy as np
import keras_cv
import keras_core as keras
from keras_core import ops

filepath = keras.utils.get_file(origin="https://i.imgur.com/gCNcJJI.jpg")
image = np.array(keras.utils.load_img(filepath))
image_resized = ops.image.resize(image, (640, 640))[None, ...]

model = keras_cv.models.YOLOV8Detector.from_preset(
    "yolo_v8_m_pascalvoc",
    bounding_box_format="xywh",
)
predictions = model.predict(image_resized)

A Confluence of Innovation

In the ever-accelerating tapestry of deep learning, Keras Core 3.0 emerges as a beacon of innovation and adaptability. With its embrace of multiple backends, advanced performance optimization, amplified ecosystem, cross-framework harmony, philosophy of progressive disclosure, and the advent of the stateless API, Keras 3.0 redefines itself as the quintessential deep learning API. It resonates across the spectrum of users, from novices venturing forth to experts charting the boundaries of possibility. As the grand symphony of deep learning unfolds, Keras Core 3.0 remains a steadfast companion, empowering developers to manifest their visions with unmatched finesse and precision.
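As a final hands-on illustration of the multi-backend workflow described above, here is a minimal sketch that picks a backend in code and trains a tiny model on random stand-in data. The backend choice, layer sizes, and data are arbitrary choices for the example:

import os
os.environ["KERAS_BACKEND"] = "jax"  # must be set before keras_core is imported

import numpy as np
import keras_core as keras

# A tiny binary classifier; the same code runs on the TensorFlow, JAX, or PyTorch backend.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data, just to exercise the training loop end to end.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

print(model.predict(x[:2], verbose=0))

Switching the environment variable to "tensorflow" or "torch" should run the identical script on a different engine, which is the whole point of the multi-backend design.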
How to Choose the Right NodeJS Framework
It's a modern world full of real-time web applications, whether we talk about online games or messengers. Most modern applications have Node.js under the hood, offering a scalable JavaScript environment. A Node.js framework features built-in resources including routing, templating, and database connectivity, and whether you're using an MVC, Full-Stack MVC, or REST API framework, all the Node.js frameworks promote efficient and productive development. So the question is: how do you choose the right Node.js framework in 2023? You asked for it, and here we bring a complete guide to help you make the right decision. Before you pick a framework, let's start by exploring the best options available.

6 Best Options in NodeJS Frameworks to Choose From

Express.js

When it comes to choosing a Node.js framework, Express.js might be the top preference of most users. That's because of its popularity, MVC architecture, and strong support for client-server interaction. Thanks to its simple code structure, Express.js offers a great developer experience, making it the perfect choice for small and medium-sized web app development. It is also an ideal choice if your project demands routing, middleware, and templating support.

Nest.js

Nest.js trails Express.js as one of the most efficient Node.js frameworks in the industry. Especially if you're looking to build complex yet efficient server-side applications, Nest.js has you covered; the credit goes to its support for object-oriented programming (OOP) and reactive programming. As it supports TypeScript and JavaScript, you can easily integrate Nest.js with Express.js and build multi-layered applications.

Koa.js

Developed by the Express.js team, Koa.js is a well-crafted framework with a lighter interface and cascading middleware. What does that mean in practice? The cascading middleware allows you to personalize your webpage content for different users without compromising the user experience. Additionally, you get access to many plugins and libraries familiar from Express.js, with lower complexity. Koa.js can be the pick for you if you care about customization and maintainability.

Feathers.js

Feathers.js is another excellent option among Node.js frameworks, particularly if you're aiming to build real-time applications. With its focus on simplicity and flexibility, Feathers.js enables you to create scalable and efficient server-side applications effortlessly. It also offers support for various databases, making it a versatile choice for handling different types of projects. If you prioritize real-time functionality, Feathers.js might be the ideal fit for your needs.

Hapi.js

Hapi.js is a robust and extensible Node.js framework designed for building APIs. With a strong emphasis on configuration over code, Hapi.js makes it easier to develop well-organized and maintainable applications. It excels at providing security features and gives you fine control over request handling, making it suitable for large-scale projects with specific security requirements. Hapi.js is also a great option for teams that value code consistency and readability.

Sails.js

For developers seeking a full-featured and opinionated framework, Sails.js is worth considering. It follows the convention-over-configuration principle, which means you spend less time configuring and more time coding. Sails.js offers an integrated ORM (Object-Relational Mapping) and supports real-time updates, making it a solid choice for building data-intensive applications like chat applications or social networks.
If you prefer rapid development with batteries included, Sails.js could be your framework of choice.

Top 5 Tips to Choose the Right NodeJS Framework

Project Requirements

When embarking on the journey of choosing a Node.js framework, your project's unique requirements should be the guiding light. Each application comes with its own set of challenges and goals, and understanding them is paramount. Whether you need real-time capabilities, robust data processing, or server-side rendering, tailoring your choice to match these needs ensures a harmonious development process and a successful end product. By thoroughly analyzing your project's necessities, you can avoid the pitfalls of selecting an ill-fitting framework, saving time, effort, and resources.

Easy to Learn and Use

Simplicity and accessibility are virtues in the world of software development, and when it comes to choosing a Node.js framework, ease of learning and usage become decisive factors. An intuitive framework with clear documentation, comprehensive tutorials, and a supportive community fosters a welcoming environment for developers of all levels of expertise. Learning curves can be reduced significantly, and teams can quickly adapt and become productive with a framework that offers well-designed interfaces and conventions. Smooth onboarding and streamlined workflows let developers focus on actual problem-solving rather than wrestling with complex setups and configurations.

Scalability

In the realm of modern web applications, scalability is the bedrock of sustainable success. When selecting a Node.js framework, it is essential to consider its ability to scale effortlessly. A framework that can handle increased user loads and growing data volumes without sacrificing performance is an invaluable asset. Horizontal scaling, the ability to distribute the application across multiple servers, and efficient utilization of multi-core processors are crucial characteristics of a scalable framework. By choosing a framework that can grow with your application's demands, you ensure a seamless user experience, better resource management, and a future-proof project.

Version Upgrades

The landscape of Node.js development is ever-evolving, with new versions constantly being released, each bringing improvements, security fixes, and new features. When evaluating Node.js frameworks, it is imperative to consider how well-maintained they are and whether they keep pace with the latest Node.js releases. A framework that tracks current releases minimizes the risk of compatibility issues and security vulnerabilities, and staying near the cutting edge of the Node.js ecosystem translates to a more future-proof and efficient application, giving you a competitive advantage.

Community Support

The value of a strong and engaged community cannot be overstated when selecting a Node.js framework. A vibrant community signifies the framework's reliability, popularity, and potential for long-term viability. When you encounter challenges or have questions during development, a robust community can provide invaluable assistance, shared knowledge, and innovative solutions. Community support often comes in the form of online forums, chat groups, documentation contributions, and open-source collaboration.

Conclusion

In conclusion, choosing the right Node.js framework for your project is a crucial decision that can significantly impact its success. The six frameworks discussed in this guide each have their unique strengths and trade-offs; weigh them against your project requirements, your team's experience, and the tips above to find the best fit.
Qwik Framework — Symbolizing Resumability & Serialization
An efficient JavaScript framework can pave the road to success in your front-end development. We live in a furiously innovative world, with a variety of JavaScript frameworks outperforming one another, but Qwik stands out as a blazing-fast yet developer-friendly framework designed to streamline your development process. Thanks to resumability and lazy loading, Qwik's team claims it can be 5-10 times faster than existing JavaScript frameworks. Meanwhile, its productive features and convenience craft a perfect environment for complex front-end development. Since there's a lot to cover about Qwik, we have incorporated everything you need to know in this guide. Let's start with understanding the framework itself; there's a lot more waiting for you in the queue!

What is Qwik? — A Solution to Developer Problems!

Developed by the creator of Angular, Qwik is an open-source frontend framework known for offering super-fast page load speed and efficiency. It delivers HTML with minified JavaScript featuring only the necessary elements, for incredible performance. Thanks to its fine-grained architecture, Qwik can isolate segments of the application and resume them on demand, rather than re-running and re-hydrating the whole page the way traditional frameworks do. The framework has reached a new potential with the v1.0 update, offering better-optimized rendering time and features like lazy execution. Generally, developers need to ship a glut of JavaScript to make a website interactive. Qwik lets you achieve the same level of interactivity with efficient execution and trimmed-down JavaScript, ridding you of slow loading times, heavy network consumption, and compromised startup times.

How is Qwik Overtaking Other Frameworks?

Ultimate User Experience

What do you expect from a framework that enables you to build a lightning-fast website? First and foremost, an amazing user experience out of the box! With JavaScript streaming, Qwik delivers digital products optimized for CWV (Core Web Vitals) scores regardless of the complexity of your project. The framework's data-fetching model also prevents waterfall delays and sustains performance even on devices with unstable networks.

Integrations

Despite shipping minimal code, Qwik can still make your website highly capable with its integrations. You can write your application once and deploy it through various adaptors, from Azure and Cloudflare to Google Cloud Run. Additionally, Qwik supports UI components and libraries including Qwik UI, Papanasi UI, Material UI, Chakra UI, and Radix. All this with just the command "npx qwik add", which shows you a complete list of available integrations.

Interoperability

Qwik is also strong on interoperability, that is, communication between ecosystems. Qwik-React is designed for lazily hydrating React components to speed up your React application. The framework allows you to leverage the React ecosystem and migrate it over to Qwik for ultimate interoperability.

Productive Developer Experience

Not only does it ensure an optimum user experience, Qwik also unlocks a productive environment for developers. The framework features directory-based routing and middleware logic, ensuring convenient website creation and deployment. Moreover, its familiar JSX and unified execution model bolster both front-end and back-end development in a single application codebase. Even if you're looking to pin functions specifically to a server or browser, you can do it easily with "server$()".
Community of Passionate Developers

Qwik is a globally connected framework with an exclusive community of developers from all around the world. The motivated and supportive community appreciates sharing ideas and pushing the boundaries of the framework's potential. Not to mention, the Discord community keeps evolving, and community members are always available to answer your questions and resolve your queries. Whether it's a bug or a general query, you can quickly reach out to the community and enjoy an unmatched development experience.

Understanding Resumability & Lazy Loading

Resumability: Enhancing Application Efficiency

Resumability is a powerful feature that allows a program to pause its execution at a specific point and later resume from that point, enabling developers to optimize resource utilization. This is particularly beneficial in scenarios involving long-running operations or resource-intensive tasks. With the Qwik framework, developers can leverage resumability to create more robust and responsive applications. Qwik provides mechanisms for serializing application state, allowing execution to pause and resume seamlessly. This empowers developers to build applications that handle interruptions, such as network failures or user interactions, with grace. By making resumability a core concept, the Qwik framework ensures that developers can create applications that are not only efficient but also resilient to various disruptions.

Lazy Loading: Improving Performance through On-Demand Loading

Lazy loading is a technique that enhances application performance by deferring the loading of certain resources until they are needed. Instead of loading all resources upfront, lazy loading fetches data, components, or modules on demand at runtime. The Qwik framework leverages lazy loading to optimize application performance. By splitting an application into smaller, independently loadable units, Qwik loads only the components that are actually needed. This approach reduces the initial load time of an application and improves its responsiveness. Lazy loading can also save bandwidth and reduce memory usage, making it particularly useful for large-scale applications or those accessed over slower network connections.

Conclusion

All right! Here you have one of the industry's most efficient JavaScript frameworks, with load-time gains its team pegs at up to 10x over alternatives. As discussed above, Qwik can split an application into independent units that load only when required, and it can serialize and resume work already done on the server to offer blazing-fast load speeds and optimized site performance. Especially if you're working with Qwik v1.0, you can unlock all the features discussed above. Whether you are an enterprise or just working on a complex project, an experienced developer is always worthwhile. Having a professional Qwik developer at your side makes development more productive and gets you the best out of your investment.
Angular v16 — An Ultimate Game Changer
Angular, the popular JavaScript framework, has been continuously evolving to meet the demands of modern web development. With each major release, Angular brings new features, enhancements, and optimizations. Angular v16 is no exception, and it introduces several groundbreaking changes that make it a game changer for developers. In this article, we will explore the top new inclusions and exclusions in Angular v16 and discuss how they revolutionize the development experience. So, let's get started!

What's New in Angular v16? — Top New Inclusions and Exclusions

Binding Router Information to Component Inputs

First and foremost, Angular v16 allows you to bind router information to component inputs, eliminating boilerplate code. You can access router data, including resolved route data, params, and queryParams, without injecting ActivatedRoute: the feature makes the router data available as inputs on the component itself (you opt in with withComponentInputBinding() when configuring the router). So you can declare inputs to receive the values instead of digging through ActivatedRoute. Here's an example:

// Current approach, which would still work
@Component({ … })
class SomeComponent {
  route = inject(ActivatedRoute);
  data = this.route.snapshot.data['dataKey'];
  params = this.route.snapshot.params['paramKey'];
}

// New approach
@Component({ … })
class SomeComponent {
  @Input() dataKey: string;
  @Input() paramKey: string;
  // or
  @Input() set dataKey(value: string) {
    // react to the value
  }
  @Input() set paramKey(value: string) {
    // react to the value
  }
}

Angular Signals

Angular Signals, new in Angular v16, is a reactivity primitive: a signal wraps a value and notifies the framework whenever that value changes, so Angular can update exactly the parts of the UI that depend on it. You create writable signals with signal(), derive read-only values with computed(), and run side effects with effect(). This fine-grained change tracking promotes loose coupling between state and rendering and makes it easier to build modular, reusable components. A minimal counter shows the pattern:

import { Component, computed, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  template: `
    <button (click)="increment()">Increment</button>
    <p>Count: {{ count() }}, double: {{ double() }}</p>
  `,
})
export class CounterComponent {
  count = signal(0);
  double = computed(() => this.count() * 2);

  increment() {
    this.count.update((value) => value + 1);
  }
}

In this example, the template reads the count and double signals; clicking the button updates count, and the computed double value recalculates automatically.

RxJS Interoperability

Angular v16 brings improved interoperability with RxJS, making it easier to combine observables with the new signals and leverage the power of reactive programming. The new @angular/core/rxjs-interop package provides utilities such as toSignal and toObservable for converting between the two worlds. For instance, you can expose an observable stream to a template as a signal:

import { Component } from '@angular/core';
import { toSignal } from '@angular/core/rxjs-interop';
import { interval, map } from 'rxjs';

@Component({
  selector: 'app-ticker',
  template: `<p>Seconds elapsed: {{ seconds() }}</p>`,
})
export class TickerComponent {
  // Convert the observable into a signal the template can read directly.
  seconds = toSignal(interval(1000).pipe(map((i) => i + 1)), { initialValue: 0 });
}

toSignal subscribes to the observable and unsubscribes automatically when the component is destroyed, simplifying the use of RxJS streams within Angular components.
DestroyRef

Managing subscriptions and resources in Angular components can be a challenge. Angular v16 introduces DestroyRef, an injectable that simplifies the cleanup process when a component is destroyed. By injecting DestroyRef, you can register callbacks that unsubscribe from subscriptions and perform other teardown tasks automatically:

import { Component, DestroyRef, OnInit, inject } from '@angular/core';
import { Subscription, interval } from 'rxjs';

@Component({
  selector: 'app-example',
  template: `<p>Example Component</p>`,
})
export class ExampleComponent implements OnInit {
  private destroyRef = inject(DestroyRef);
  private subscription: Subscription;

  ngOnInit() {
    this.subscription = interval(1000).subscribe();
    // Registered callbacks run automatically when the component is destroyed.
    this.destroyRef.onDestroy(() => this.subscription.unsubscribe());
  }
}

Because the callback registered with onDestroy runs automatically when the component is destroyed, this reduces the risk of memory leaks and resource wastage.

Non-Destructive Hydration

Angular v16 introduces non-destructive hydration, a feature that improves hydration during server-side rendering (SSR). In previous versions, Angular would destroy and re-create the server-rendered DOM during hydration, which could be costly in terms of performance. With non-destructive hydration, enabled through provideClientHydration(), Angular now reuses the existing server-rendered markup, significantly reducing the overhead of rendering and improving the overall user experience. Transferring server state to the client fits naturally into this model:

import { Component, NgModule, TransferState, makeStateKey } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

const PAGE_DATA = makeStateKey<{ title: string; content: string }>('pageData');

@Component({
  selector: 'app-root',
  template: `
    <h1>{{ title }}</h1>
    <p>{{ content }}</p>
  `,
})
export class AppComponent {
  title: string;
  content: string;

  constructor(private transferState: TransferState) {
    // Retrieve the data captured during server-side rendering.
    const data = this.transferState.get(PAGE_DATA, { title: '', content: '' });
    this.title = data.title;
    this.content = data.content;
  }
}

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule.withServerTransition({ appId: 'my-app' })],
  bootstrap: [AppComponent],
})
export class AppModule {}

In this example, the AppComponent displays a title and content. During server-side rendering, the data is stored in TransferState; when the application initializes on the client, the constructor reads the data back and assigns it to the component properties. The component is hydrated without being destroyed and re-created, improving the performance of server-side rendering.

CSP Support for Inline Styles

Content Security Policy (CSP) is an important security mechanism that protects web applications against cross-site scripting (XSS) attacks. In Angular v16, the inline styles Angular emits can comply with CSP restrictions: the framework supports applying a nonce to the styles it injects, so component styles execute safely within your CSP policy. This gives developers more flexibility to style their components without compromising security.

Exclusion of ngcc

In previous versions of Angular, ngcc (the Angular Compatibility Compiler) was used to compile and transform third-party libraries to make them compatible with the Angular Ivy compiler. Angular v16 removes the need for ngcc altogether.
With the advancements in Ivy and the ecosystem's migration to Ivy-compatible libraries, ngcc is no longer required. This simplifies the build process and improves overall build performance, making Angular projects faster to compile and deploy.

Esbuild Dev Server

Angular v16 introduces a new development server powered by esbuild, a fast JavaScript bundler, shipped as a developer preview. The esbuild-based dev server significantly reduces the startup time of Angular applications during development. It achieves this by leveraging esbuild's speed to rebuild and serve the application far faster than the previous webpack-based tooling.
Django vs Flask — Which Python Framework is Perfect for Your Web Development Process?
When it comes to web development in Python, two prominent frameworks stand out: Django and Flask. These frameworks offer developers a robust foundation to build powerful web applications efficiently. Built around the Model-View-Template (MVT) architectural pattern, Django's take on MVC, Django is favored for large-scale, complex projects. On the other hand, Flask is a microframework offering a lightweight and flexible approach, empowering developers to have greater control over the application structure. Both platforms have distinctive capabilities and drawbacks, which complicates the decision. In this article, we'll delve into the technical aspects and industrial attributes of Django and Flask to help you make an informed decision for your web development endeavors. So, let's get started!

Django — Self-Sufficient Web Framework

Maintained by the Django Software Foundation, Django is a robust and scalable web framework known for its "batteries-included" philosophy. With built-in features and packages, Django promotes rapid development by minimizing the need for external dependencies. Its core components include an Object-Relational Mapping (ORM) layer, a template engine, form handling, authentication, and authorization. Django's ORM simplifies database interactions, allowing seamless integration with various database systems. The framework's MVT architecture provides a clear separation of concerns. Additionally, the admin interface offers an out-of-the-box solution for managing application data, making Django a popular choice for content-heavy websites.

Flask — Minimalistic Microframework

Flask is a lightweight and flexible microframework designed for simplicity and minimalism. Created by Armin Ronacher, it provides a solid foundation for web development while giving developers greater control over the application structure. It follows a "micro" philosophy, providing essential tools and leaving the choice of additional libraries to the developer. The framework leverages the Werkzeug toolkit for routing and request handling and the Jinja2 template engine for rendering dynamic content. Its flexibility and scalability make Flask an excellent choice for small to medium-sized projects, RESTful APIs, and microservices. In addition to its features, an active community and extensive documentation ensure continuous support and updates, contributing to its widespread adoption.

Comparison of Django and Flask Based on Industrial Attributes

Development Capabilities

Django's batteries-included approach provides a wide array of built-in features, making development faster and more efficient. Its robust ORM simplifies database interactions, while the template engine streamlines UI development. Flask, by contrast, offers greater flexibility, allowing developers to choose and integrate only the necessary components, which makes it ideal for lightweight and highly customizable applications. In short, Django's extensive feature set is better suited for complex projects that require rapid development and adherence to best practices, while Flask is a great choice for smaller projects that need fine-grained control over the application structure.

Scalability

Django's scalability makes it a strong choice for large-scale applications. With its ability to handle heavy workloads, Django's robust architecture and efficient request handling ensure optimal performance. Flask, on the other hand, is inherently scalable in a different sense, allowing developers to add or remove components as needed.
Flask — Minimalistic Microframework

Flask is a lightweight and flexible microframework designed for simplicity and minimalism. Developed by Armin Ronacher, it provides a solid foundation for web development while offering developers greater control over the application structure. It follows a “micro” philosophy, providing the essential tools and leaving the choice of additional libraries to the developer. Under the hood, the framework leverages the Werkzeug toolkit for routing and request handling and the Jinja2 template engine for rendering dynamic content. Its flexibility and scalability make Flask an excellent choice for small to medium-sized projects, RESTful APIs, and microservices, and its active community and extensive documentation ensure continuous support and updates, contributing to its widespread adoption.
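The “micro” philosophy is easiest to see in code. Here is a minimal sketch of a complete Flask application; the route and response data are invented for illustration.

# app.py: a complete Flask application in a handful of lines
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/books/<int:book_id>")
def get_book(book_id):
    # A real app would query a database; hard-coded here for brevity.
    return jsonify({"id": book_id, "title": "Example Book"})

if __name__ == "__main__":
    app.run(debug=True)

Everything beyond this (database access, authentication, forms) is added through extensions of your choosing, which is exactly the trade-off the comparison below explores.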
Comparison of Django and Flask Based on Industrial Attributes

Development Capabilities

Django’s batteries-included approach provides a wide array of built-in features, making development faster and more efficient. Its robust ORM simplifies database interactions, while the template engine streamlines UI development. Flask, in contrast, offers greater flexibility, allowing developers to choose and integrate only the components they need, which makes it ideal for lightweight, highly customizable applications. In short, Django’s extensive feature set is better suited to complex projects that require rapid development and adherence to best practices, while Flask is a great choice for smaller projects that demand fine-grained control over the application structure.

Scalability

Django’s robust architecture and efficient request handling let it absorb heavy workloads, making it a strong choice for large-scale applications. Flask, on the other hand, scales through composition: its modular, customizable design lets developers add or remove components as needed and optimize performance for specific use cases.

Architecture

As discussed above, Django follows the Model-View-Template (MVT) architectural pattern, which promotes code organization and maintainability and makes it easier for multiple developers to collaborate on a project. Flask, by contrast, does not enforce any particular pattern: it can support a similar separation of concerns, but developers are free to choose how to structure their projects and how components interact.

Components and Reusability

Django is famous for its comprehensive set of built-in components, such as the ORM, template engine, and authentication system. By reducing external dependencies, this promotes code reuse and shortens development time. Flask provides greater flexibility but relies on external packages for specific functionality; its modular design still facilitates component reuse, enabling developers to build custom solutions tailored to their project requirements.

Community and Support

Django boasts a large and active community, with numerous contributors and a wealth of resources available. This community-driven development ensures frequent updates, comprehensive documentation, prompt issue resolution, and encouragement of best practices. Flask also enjoys an active community, although a smaller one, which thrives on the framework’s simplicity and flexibility and offers extensive documentation along with a range of community-contributed extensions. Django’s larger community offers broader support; Flask’s provides a close-knit environment for developers seeking minimalistic solutions.

Establishment and Updates

With its long history, Django has established itself as a mature and stable framework, trusted by many large-scale projects and enterprises. Its consistent updates, bug fixes, and security patches ensure reliability and compatibility with the latest technologies. Despite being a younger framework, Flask has also gained substantial popularity and sees regular updates, although at a relatively smaller scale, focused on maintaining stability and introducing features based on community feedback.

Testing

Django provides a robust testing framework as part of its core, with utilities that simplify unit testing, integration testing, and user-interaction testing. Flask, being a microframework, does not include a built-in testing framework, but it integrates seamlessly with popular Python testing libraries such as pytest and unittest, offering flexibility in choosing a testing approach. Both frameworks promote test-driven development and provide the necessary tools and extensions for efficient, thorough testing.

End of the Line

In conclusion, the choice between Django and Flask ultimately depends on the specific requirements and goals of your web development project. Django’s batteries-included approach, mature ecosystem, and MVT architecture make it an excellent choice for large-scale, complex applications. Flask’s lightweight and flexible nature, coupled with its simplicity and customizability, makes it ideal for smaller projects, RESTful APIs, and microservices: it empowers developers with fine-grained control over the application structure and the freedom to integrate only the components they need. Consider your project’s scale, complexity, customization needs, and community support when making your decision, ensuring the best fit for your web development process.
How the New TypeScript 4.9+ Streamlines Type Safety in Storybook 7.0
TypeScript is widely used in industry as a typed extension of JavaScript. Because it reports mismatched types as you write, TypeScript improves both productivity and the developer experience. If you work in TypeScript, Storybook 7.0 is worth a close look: it lets you write stories in TypeScript with zero configuration and improves the experience with built-in typed APIs. The Storybook 7.0 release addresses the pain points of the previous version, and you now get the combination of CSF 3 and the new TypeScript 4.9+ satisfies operator to bolster accuracy and safety. Read on to explore how this combination can make your coding more productive and safer in the latest Storybook 7.0.

Storybook 7.0 — An Introduction to Update 8/18

Storybook 7.0, also referred to as update 8/18, is a core upgrade of Storybook focused on better interaction testing and user experience. You get 3.5% more screen space for the Canvas, with over 196 icons available for complete customization. Storybook 7.0 also integrates more smoothly with Remix, Qwik, and SolidJS, and it brings documentation upgrades such as MDX 2 and simplified story imports. The most notable upgrade, though, is the combination of TypeScript 4.9+ and CSF 3. Let’s see what makes it a big highlight!

What is TypeScript 4.9+?

TypeScript is a statically typed superset of JavaScript that provides features such as type annotations, interfaces, and generics; version 4.9 adds the satisfies operator on top. It enables developers to write more maintainable and scalable code by catching potential errors at compile time rather than at runtime. One of the key benefits of using TypeScript with Storybook 7.0 is that it allows developers to specify the expected types of props and events for each component. This ensures that any stories using those props and events are properly typed, and it provides a clear contract for how each component should be used. TypeScript can also improve the documentation and discoverability of UI components in Storybook 7.0: by leveraging TypeScript’s support for JSDoc annotations, developers can document the expected usage of each component and generate API documentation automatically. The sketch below illustrates the difference between story typings in Storybook 6 and Storybook 7.0.
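The original comparison was shown as an image; as a stand-in, here is a minimal sketch of the same contrast, assuming a hypothetical React Button component with a primary prop. Storybook 6 typed stories with the React-specific ComponentMeta and ComponentStory helpers and render-function stories, while Storybook 7 uses the renderer-agnostic Meta and StoryObj types with plain story objects.

// Button.stories.tsx in Storybook 6 (CSF 2): React-specific types, stories as render functions
import type { ComponentStory, ComponentMeta } from '@storybook/react';
import { Button } from './Button';

export default {
  title: 'Example/Button',
  component: Button,
} as ComponentMeta<typeof Button>;

const Template: ComponentStory<typeof Button> = (args) => <Button {...args} />;
export const Primary = Template.bind({});
Primary.args = { primary: true };

// Button.stories.tsx in Storybook 7 (CSF 3): renderer-agnostic types, stories as objects
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta: Meta<typeof Button> = {
  title: 'Example/Button',
  component: Button,
};
export default meta;

export const Primary: StoryObj<typeof Button> = {
  args: { primary: true },
};

The object form is shorter, needs no Template boilerplate, and lets the type system infer the shape of args directly from the component.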
Combination of CSF3 Syntax and TypeScript 4.9+

In addition to TypeScript 4.9+, the Component Story Format (CSF) also received an upgrade, from CSF 2 to CSF 3. Together they offer enhanced type safety, better in-editor type checking, and a codemod for easy upgrades. Here are the top elements of this combination!

StoryObj Type

With the CSF 3 upgrade, you get access to the StoryObj type, which treats stories as objects and infers the types of a component’s props. Story objects existed before, but the earlier typings were not powerful enough to infer prop types automatically. The new syntax also deprecates the React-specific ComponentMeta and ComponentStory types and their counterparts for Vue, Svelte, and Angular; the side-by-side sketch of CSF 2 and CSF 3 above shows the result.

satisfies Operator

The satisfies operator is the most useful TypeScript 4.9+ feature for strict type checking. Pairing CSF 3 with the satisfies operator improves type safety and catches unspecified or misspelled args. Without it, TypeScript raises no issue for an unspecified label arg; adding satisfies to the meta declaration, as below, fixes that.

// Button.stories.tsx
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta = {
  title: 'Example/Button',
  component: Button,
} satisfies Meta<typeof Button>;

export default meta;
type Story = StoryObj<typeof meta>;

export const Primary: Story = {
  args: {
    primary: true,
  },
};

After this change, you can expect TypeScript to report an error for any unspecified arg.

Auto-infer Component Level args

Pairing CSF and TypeScript is good, but prop types will not be inferred automatically unless you declare the connection between the story and meta objects. Without it, TypeScript shows errors on the stories even if you have provided a label in the meta-level args. Auto-inferring component-level args closes that gap: pass typeof meta to StoryObj so the stories know about the args already defined at the meta level. Here’s how you can do it!

// Button.stories.tsx
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta = {
  title: 'Example/Button',
  component: Button,
  args: {
    label: 'Default',
  },
} satisfies Meta<typeof Button>;

export default meta;
type Story = StoryObj<typeof meta>;

// 👇 TS won't complain about the "label" missing
export const Primary: Story = {
  args: {
    primary: true,
  },
};

export const Secondary: Story = { args: { disabled: false } };
export const Disabled: Story = { args: { disabled: true } };

Vue

As discussed above, Storybook 7.0 is more compatible with modern frameworks, and Vue is a prime example, though you need to set up the environment: type-check your single-file components with vue-tsc and add editor support in VS Code. Take a look at this Vue 3 single-file component.

<script setup lang="ts">
defineProps<{
  count: number,
  disabled: boolean
}>()

const emit = defineEmits<{
  (e: 'increaseBy', amount: number): void;
  (e: 'decreaseBy', amount: number): void;
}>();
</script>

<template>
  <div class="card">
    {{ count }}
    <button @click="emit('increaseBy', 1)" :disabled="disabled">
      Increase by 1
    </button>
    <button @click="$emit('decreaseBy', 1)" :disabled="disabled">
      Decrease by 1
    </button>
  </div>
</template>

Svelte

Like Vue, Svelte features excellent support for TypeScript inside .svelte files. You can use svelte-check, together with VS Code editor support, to run type checks. Consider the following component as an example.

<script lang="ts">
  import { createEventDispatcher } from 'svelte';

  export let count: number;
  export let disabled: boolean;

  const dispatch = createEventDispatcher();
</script>

<div class="card">
  {count}
  <button on:click={() => dispatch('increaseBy', 1)} {disabled}>
    Increase by 1
  </button>
  <button on:click={() => dispatch('decreaseBy', 1)} {disabled}>
    Decrease by 1
  </button>
</div>

Conclusion

In conclusion, the new TypeScript 4.9+ features in Storybook 7.0 have greatly improved the type safety and developer experience of creating UI components. With the added type-inference capabilities, developers can now create reusable components with greater confidence, catching type errors at compile time instead of discovering them at runtime.