Git Branching Strategies for Next-Level Web Development
Branching strategies in Git have become crucial tools for improving the web development experience. These approaches let web developers collaborate on a project while maintaining multiple versions of the primary codebase. In short, a team can work on new features independently without affecting the main codebase. A well-constructed Git strategy improves project collaboration and can speed up code deployment considerably.

Best Git Branching Strategies

In this post, we explore some of the top Git branching strategies, along with their pros and cons. So, without any delay, let's get started.

Trunk-Based Development (TBD)

Trunk-based development, or TBD, is a version control management approach built around a single shared branch known as the 'trunk'. Developers merge frequent, small updates into this core trunk (the main branch), working in short-lived branches with a handful of small commits. By streamlining the integration and merging phases, trunk-based development improves organizational performance and helps programmers achieve continuous integration and continuous delivery.

Pros
- TBD supports frequent integration of code changes into the main branch.
- Combined with automated testing, bugs are identified quickly.
- It encourages better team collaboration.

Cons
- Short-lived branches can make it harder to isolate complex features.
- TBD relies on solid CI/CD practices to maintain stability.

Git-Flow

Git-Flow is another renowned branching model for Git. It assigns distinct responsibilities to different branches: main/master for production, feature branches for new features, hotfix branches for urgent bug fixes in production, develop for active development, and more. Introduced by Vincent Driessen, this branching model provides a structured approach to organizing branches. A typical Git-Flow setup includes the following branches.

Master
This branch contains production-ready code.

Develop
As the name indicates, the develop branch supports ongoing development and the integration of new features.

In addition, Git-Flow uses auxiliary branches that facilitate the different stages of development and release management.

Pros
- Git-Flow is suitable for large teams.
- Users can manage multiple product versions cleanly.
- It provides clear responsibility for each branch.
- Users can navigate to different production versions via tags.

Cons
- Due to the number of branches, some users might find it complex.
- Because releases follow a multi-step process, development and release frequency may slow down.
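To make the Git-Flow model concrete, here is a minimal, hedged sketch of how a feature and a hotfix might move through the branches using plain Git commands (the branch and tag names are purely illustrative; your team's conventions may differ):

# Start a feature from develop
git checkout develop
git checkout -b feature/user-profile

# ...commit work, then merge the finished feature back into develop
git checkout develop
git merge --no-ff feature/user-profile
git branch -d feature/user-profile

# Urgent production fix: branch from master, merge back into master and develop
git checkout master
git checkout -b hotfix/login-crash
# ...commit the fix
git checkout master
git merge --no-ff hotfix/login-crash
git tag v1.0.1
git checkout develop
git merge --no-ff hotfix/login-crash

The --no-ff flag keeps an explicit merge commit, which preserves the history of each feature or hotfix branch.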
GitHub Flow

GitHub Flow is one of the most lightweight and straightforward Git branching strategies. Suited to smaller teams and projects, it supports quick bug fixing and is used with GitHub, the popular hosting service for Git repositories. Its central rule is to keep the main branch deployable at all times. The GitHub Flow workflow uses two kinds of branches.

Main
The workflow starts with the main branch, which contains the newest stable code ready for release.

Feature
Developers create feature branches from the main branch to work on new features. Once a feature is complete, its branch is merged into main and then deleted, which keeps the repository organized and clean.

Pros
- GitHub Flow is easy to learn and use; even beginners can employ it.
- It supports quick feedback loops, since developers merge their changes into the main branch and release them shortly afterwards.
- GitHub Flow is flexible enough to meet the requirements of different teams, and teams can use feature flags to manage new features released to production.

Cons
- GitHub Flow can be challenging to manage in complex projects.
- Coordinating merges and keeping the main branch stable can be difficult.

GitLab Flow

If you are looking for a powerful Git branching strategy that scales with your requirements, GitLab Flow is an ideal choice. The strategy is designed for developers who use the GitLab repository manager, and it keeps the development process straightforward by focusing work on a single protected branch, usually the main branch. The GitLab Flow workflow described here uses the following four branches.

Main
This primary branch comprises the latest stable code ready for release.

Develop
The workflow begins with the develop branch, which holds bug fixes and new features.

Feature
Developers create feature branches straight from the develop branch to work on new features.

Release
Before every new release, a release branch is created from the develop branch. It is used to stage the new features and fix bugs for the release. Later on, developers merge the release branch into both the develop and main branches.

Pros
- GitLab Flow is powerful and easy to scale.
- You get a clear separation between in-progress code and production-ready code.
- Each feature is developed independently, and separate branches let users work together on different features.

Cons
- Merging feature branches into the develop branch can sometimes cause conflicts.
- Beginners might find this strategy a bit complicated.

Conclusion

These are some of the best Git branching strategies, along with their uses, strengths, and weaknesses. Now you can make an informed decision: choose the one that best aligns with your release approach, project type, collaboration requirements, and team size.
Why Consider Hiring a Web Development Consultant in 2024
Are you planning to establish an online presence in 2024? Great going! Gaining success online is not a matter of chance; it requires proper strategies and the implementation of relevant technologies. In the ever-evolving landscape of technology, you cannot rely on conventional, fixed approaches to grow. You need a web development consultant who can guide you with their expertise and experience. This post explains why you should consider hiring web development consultants. Without any further delay, let us get started.

Web Development Consultancy Overview

A web development consultant is an expert who gives advice and support in planning, developing, and maintaining websites and applications. The consultant helps you throughout the journey of turning a concept into reality. They not only streamline the web development process but also improve the visibility of your website. In short, web consultants enable you to reach your target audience. Let us understand what exactly a web development consultant does.

What Services Do Web Development Consultants Offer?

Generally, a web development consultancy assists businesses in creating, improving, and maintaining their web presence. Depending on your requirements, they create a strategy to develop your website from scratch or revamp the current one. You can seek consultancy for website development, e-commerce websites, CMS (content management systems), software development, and more. In addition, they help you with website design, performance optimization, SEO, website security, and maintenance. The web development consultancy process typically looks like this:

Initial Consultation
The initial consultation involves meeting with a client and understanding their business goals. In this process, a consultant identifies your target audience, scope of work, and budget.

Project Planning
After the initial discussion, the consultant does comprehensive project planning based on your objectives. This includes setting project timelines and milestones, after which the expert determines the framework, tools, and expertise required.

Design Phase
In this phase, web development consultants guide you in developing the structure and layout of the website. They give expert advice on design elements such as UI and UX, and they can also prepare website navigation paths and UX content strategies.

Development
An experienced consultant chooses the most suitable development platform for your website. They also guide you through the integration of third-party tools.

Maintenance
Web development consultants offer ongoing support for website maintenance. After the website launches, they advise on necessary improvements and security updates. You can also discuss future improvements and additions to your online business.

Advantages of Hiring Website Development Consultants

What is the benefit of hiring a web development consultant? Many people ask this question, thinking that hiring website developers and designers is enough to succeed online. Remember, a successful online business requires a holistic approach, and a web development consultancy provides that broader perspective on your online business. Some benefits of hiring a web development consultant include:

Expert Advice
Taking a business online is not child's play. You must have a deep understanding of the web development world. Website development consultants have years of experience in the industry and stay aware of trends.
Consequently, they help you understand things better and make smarter business decisions.

Customized Solutions
Every online business is unique. You cannot rely on strategies that worked for someone else. Consultants provide tailored solutions for your specific business and help you create a website that aligns perfectly with your business goals. Web consultants possess the knowledge and skills required across the different areas of website development.

Cost-Effective
Some newcomers see hiring a web development consultant as an additional expense. Nevertheless, its long-term benefits outweigh the initial cost. A web consultant optimizes your website for better conversion, which naturally increases your revenue. Besides this, they refine your website's design and layout to enhance the experience of your end users, and they ensure your website is optimized for search engines. These details increase traffic to your website, and ultimately your business grows.

Lets You Focus on Your Core Business
Web development consultants take care of the overall growth of your online business. They plan strategies for your website and help you execute the right action at the right moment. You do not need to worry about which trends are current or which tools you need to use; the consultant will guide you on everything. That allows your internal team to focus on their core competencies.

Do Not Wait Years for Results
If you are a beginner, learning everything and growing online can take years. When you work with a web development consultant, you get faster results. They have tried-and-tested strategies for succeeding online, plus the tools and experience to get you there sooner. This saves time as well, which is a valuable resource in an online business.

Knowledge Transfer
Hiring a consultant also transfers knowledge to your internal team. You and your team can learn from their experience and expertise, and the knowledge gained will be useful for your future projects.

Final Words: Should I Hire a Web Development Consultant in 2024?

There are more than 200 factors that determine how your website performs in search engines. Website speed, mobile responsiveness, search optimization, and various aspects of UI and UX are things you cannot compromise on. If you want to take your online business to the next level, you need to work on all of them, and an experienced consultant can help you do that. Do not miss out on the opportunities. Partner with Almas to get professional web development consultancy. We will elevate your online presence and make sure your website aligns with the latest industry standards. So what are you waiting for? Consult with us and stay ahead of your competitors in 2024.
Parcel Bundler: The Ultimate Guide for Beginners
The web development landscape is continuously progressing. Today, it has become easier to optimize the performance and efficiency of a web project, thanks to bundling tools. These platforms boost productivity and save the headache of setting up and configuring different web tools. While numerous bundling tools have emerged recently, one renowned option is Parcel Bundler. This post explores the different features of Parcel Bundler. Before we get to its features, let's learn a bit more about it.

Parcel Bundler Overview

Parcel Bundler is an advanced tool that helps web developers bundle web resources. The bundler supports zero configuration, meaning it does not need a configuration file to bundle web applications. Parcel Bundler is an open-source tool that supports various languages and file types. It can combine multiple files into a single output, bundling files like HTML, CSS, and JavaScript into a format optimized for the web. Furthermore, it lets you optimize your code and prepare web projects for deployment. Some well-known features of Parcel Bundler are listed below.

Features of Parcel Bundler

Zero-Config Module Bundler
Parcel Bundler supports a zero-config setup, so developers can bundle their web applications without configuring the bundling process. It eliminates the need to write and interpret configuration files.

Hot Module Replacement
HMR, or Hot Module Replacement, is an advanced feature of Parcel Bundler. It lets developers update their code in real time without reloading the full page. As web developers make changes to their code, Parcel rebuilds the changed files and updates the application in the browser. Parcel Bundler's HMR updates modules in the browser at runtime without refreshing the entire page, so developers retain the application's state while making small changes to their code.

Bundling
Parcel Bundler enables users to keep all their project files together. It can bundle JavaScript, CSS, and other files together. Because Parcel automatically examines the requirements of your project, it produces optimized bundles accordingly.

File Compression
Parcel Bundler performs a wide range of optimizations when creating the production build, and file compression is one of them. The bundler minimizes file sizes, for example by shortening variable names.

Code Minification
Parcel Bundler has a built-in feature for code minification. It eliminates unnecessary characters, such as spaces and comments, from web code without influencing its functionality. Minification improves the performance of your web application by reducing the overall loading time. It kicks in automatically when you build your project in production mode with the Parcel build command:

parcel build index.html

The command tells Parcel to bundle the project whose entry point is the index.html file.

Image Optimization
Parcel Bundler also excels at handling image optimization. It minimizes the size of images without affecting their quality, so websites and applications load faster. Parcel optimizes images in several ways: it adjusts the compression settings of PNG and JPEG files, it may convert images to a different format, and it can resize image dimensions.

Development Caching
Parcel Bundler caches certain resources during development to avoid reprocessing files that have not changed. It speeds up the build process by updating and recompiling only the parts of a web application that have been changed. Development caching is an exceptionally helpful feature for large projects.
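As a quick, hedged illustration of this zero-config workflow (assuming Parcel is installed and index.html is the project's entry point):

# Start a local development server with hot module replacement
parcel index.html

# Create an optimized production build (minification, tree shaking, and so on)
parcel build index.html

The first command serves the project while you develop, and the second produces the deployable bundle discussed in the minification and optimization sections above.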
Code Cleanup
Parcel comes with a built-in feature to remove leftover debugging statements. While building a website or application, we often leave notes for ourselves, for instance console.log calls in the code. Parcel can strip such statements from the code automatically, keeping the shipped codebase neat and clean.

Tree Shaking
Tree shaking is another crucial feature of Parcel Bundler. It removes unused code, known as dead code, from the final bundle. The term 'tree shaking' is inspired by the idea of shaking a tree to get rid of dead leaves. Tree shaking automatically identifies unused code and removes it. It works best with ES6 module syntax (import/export), because that syntax allows imports and exports to be identified statically, which makes it easier to determine which code is unused. To eliminate all dead code, tree shaking analyzes the whole dependency tree from the entry point of the application, traces which functions, variables, and imports are actually used, and removes the rest during the bundling process.

Browser Compatibility
Parcel Bundler provides a smooth development experience thanks to its browser compatibility support. The tool helps make sure your output works across different browsers. Parcel integrates with Babel and transpiles modern JavaScript (ES6+ syntax) into a backward-compatible version, so the result works with a diverse range of browsers, including older ones that do not support modern JavaScript features.

Installation of Parcel
If you have Node.js and npm installed, you can install Parcel Bundler with the following command:

# Installing Parcel Bundler globally
npm install -g parcel-bundler

Installing Parcel globally lets you use the parcel command in any project folder.

Conclusion
Parcel Bundler is a trustworthy and efficient tool for bundling web applications. Features like zero configuration, caching, and tree shaking give it an edge over its competitors. Whether you are a beginner or an experienced web developer, you can leverage this technology to improve your productivity and efficiency. So what are you waiting for? Boost your web development workflow with this excellent bundling tool.
Types of NoSQL Databases: Everything You Need to Know About Them
NoSQL, or 'Not Only SQL', is a renowned type of database management system (DBMS) that manages large volumes of unstructured or semi-structured data. Because it removes various limitations of conventional relational databases, NoSQL has become popular. Google, Facebook, Amazon, and Netflix are some of the reputable companies that use NoSQL. This blog introduces the different types of NoSQL databases and their features. Before we move further, let's find out how NoSQL differs from SQL.

SQL vs. NoSQL Databases: Quick Comparison

Type
SQL databases are relational databases, while NoSQL databases are known as non-relational databases.

Query Language
SQL databases use Structured Query Language for operations such as SELECT, INSERT, UPDATE, and DELETE. NoSQL databases, on the other hand, use their own query languages or APIs for manipulating data, depending on the type of database.

Scalability
Traditional SQL databases are vertically scalable: you improve their performance by upgrading the hardware. NoSQL databases, by contrast, are horizontally scalable from the ground up, which makes them better at handling large amounts of data and traffic.

Properties Followed
SQL databases follow ACID (Atomicity, Consistency, Isolation, and Durability) transactions to maintain data integrity. NoSQL databases are typically designed around the CAP theorem (Consistency, Availability, and Partition Tolerance).

Types of NoSQL Databases

We can categorize NoSQL databases into the following four types. Each has its strengths and limitations, so you can choose based on your requirements. Let us learn about them in detail.

Key-Value Pair Database
The key-value pair database is one of the simplest types of NoSQL databases. It is a non-relational database that stores data elements as key-value pairs and can handle heavy data loads. Conceptually it works like a hash map with two columns: the key and the value. Each key is unique, while the value can be a string, a binary large object (BLOB), or JSON (JavaScript Object Notation). The three major strengths of the key-value pair database are speed, simplicity, and scalability. Generally, this type of database is used for dictionaries, user profiles, user preferences, and similar data.
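As a small, hedged illustration of the key-value model, here is how storing and reading a user preference might look with Redis, a popular key-value store not covered above (this assumes a local Redis server and the redis-cli client; the key name is purely illustrative):

# Store a user's theme preference under a unique key
redis-cli SET user:42:theme "dark"

# Read the value back by its key
redis-cli GET user:42:theme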
Graph-Based Database
A graph-based database stores entities and the relationships between those entities. Commonly, this type of database is used to store data for social networking websites, fraud detection systems, healthcare networks, and more. A graph-based database stores each data item as a node, and the connections between nodes are known as edges. Every node and edge has a unique identifier. The database lets users discover relationships in the data by following these links. Unlike relational databases, graph-based databases are multi-relational. A few well-known graph-based databases are FlockDB, Neo4j, and InfiniteGraph. All in all, a graph-based database stores, manages, and queries data as a graph structure.

Column-Oriented Database
A column-oriented database is another non-relational database. It stores data by column rather than by row and reads it column by column; you can think of it as a collection of columns like those in a table, where each column stores one type of information. The database reads and retrieves data at high speed. You can run analytics on a limited number of columns and read only those columns without spending memory on unwanted data. A column-oriented database performs queries such as COUNT, SUM, AVG, and MIN very quickly. Therefore, this kind of database is used for analytics and reporting, data warehousing, and library card catalogs.

Document-Oriented Database
A document-oriented database is one of the most prominent types of NoSQL databases. It stores and manages data much like we organize documents in the real world. Although data is stored and retrieved as key-value pairs, the value is stored as a document, typically in JSON, XML, or BSON format. Users can store and retrieve documents in a form that is close to the data objects used in their applications, so very little translation is needed to access and use the data. Document-oriented databases support flexible schemas, scalability, and quick retrieval. MongoDB and Couchbase are two fine examples. This type of database is used in CMS (content management systems), e-commerce websites, gaming applications, collaboration tools, and more.

So these are the four types of NoSQL databases. Let's find out why this kind of database system is growing in popularity.

Features of NoSQL

NoSQL offers several advancements over traditional databases. We have listed a few significant ones.

Compatible with Multiple Data Models
Unlike relational databases, NoSQL is not strict about a single data model. It can handle multiple data models and can manage structured, semi-structured, and unstructured data with the same ease.

Schema Flexibility
Unlike conventional database systems, NoSQL databases do not require a fixed schema; they support relaxed schemas. NoSQL can manage different data formats and structures, and because it does not enforce a strict predefined schema, it permits changes to the data model over time.

Scalability
As mentioned above, NoSQL databases are scalable. Users can scale them horizontally by adding more nodes and servers. Consequently, they are suitable for websites and web applications with continuously growing data.

Excellent Uptime
NoSQL databases have excellent uptime. They are built for distributed deployment and keep multiple copies of data on various nodes, so businesses can run their databases smoothly with minimal downtime. If one node breaks down, another takes its place and serves the data from its copy.

Examples of NoSQL

Now that you know the different types of NoSQL databases and their uses, below are some examples of each.

Document Database
MongoDB is a well-known document-oriented database. It stores data in JSON-like documents and is popular for its scalability and flexibility.

Column Database
Apache Cassandra is a well-known column-based database system that handles large amounts of data across different commodity servers.

Graph Database
Amazon Neptune is a managed graph database service from AWS. It can work with both RDF graph and property graph models.

Key-Value Database
Amazon DynamoDB is a database service from Amazon Web Services that provides high uptime and low-latency key-value storage; it is a classic example of a key-value database.

Conclusion

The various types of NoSQL databases are a crucial part of modern application development. Pick the type that matches your data model, scalability needs, and use case, and your application will be well placed to handle growing, fast-changing data.
The Ultimate Guide to GitLab CI/CD, with an Example of Building a CI/CD Pipeline for Python
No one can deny the significance of CI (continuous integration) and CD (continuous deployment) in software development. They enable coders to integrate and deploy software changes while identifying possible issues along the way, which naturally saves a developer's time and effort. While several platforms support CI/CD, GitLab has grown in popularity because it automates several aspects of software development. This guide introduces the features of GitLab CI/CD. In addition, you will learn how to build CI/CD pipelines on GitLab. So let us get started.

What is GitLab CI/CD?

CI stands for Continuous Integration, while CD stands for Continuous Deployment/Delivery. CI supports the continuous integration of code changes from various contributors into a shared repository, while CD allows code to be deployed as it is developed. GitLab CI/CD is a set of tools and techniques that automate software development. It enables users to build, test, and deploy code changes inside GitLab and deliver them to end users. The platform aims to support a consistent workflow and improve the speed and quality of code.

Features of GitLab CI/CD

GitLab has several benefits over conventional software development methods. Some key benefits are listed below.
⦁ GitLab keeps CI/CD and code management in the same place.
⦁ It is a cloud-hosted platform, so you do not need to worry about setting up and managing databases or servers.
⦁ You can sign up for the subscription plan that suits your budget.
⦁ You can run different types of tests, such as unit tests, integration tests, or end-to-end tests.
⦁ GitLab automatically builds and tests your code changes as they are pushed to the repository.
⦁ Since GitLab CI/CD is built in, there is no need to install plugins.
⦁ The platform supports continuous code collaboration and version control.

The Architecture of GitLab CI/CD

GitLab CI/CD architecture consists of the following components.

GitLab Server
Like every online platform, GitLab runs on a server. The GitLab server hosts all your Git repositories and keeps your data available for your clients and team. It hosts your applications, holds the pipeline configuration, manages pipeline execution, and assigns jobs to the available runners. GitLab.com itself is run by a GitLab instance that comprises an application server, database, file storage, background workers, and more.

Runners
Runners are applications that execute CI/CD pipelines. GitLab provides several shared runners that every user can access on gitlab.com, and users are also allowed to set up their own GitLab runners.

Jobs
Jobs are the tasks performed by a GitLab pipeline. Each job has a unique name and a script to run. Jobs in the same stage can run in parallel, while jobs in a later stage start only after the previous stage has finished.

Stages
Stages group related jobs and define the order in which they run in the pipeline, for instance build, test, and deploy. A stage completes only when all of its jobs are finished.

Pipeline
A pipeline is the complete set of stages, where every stage comprises one or more jobs. GitLab supports various types of pipelines, including basic pipelines, multi-branch pipelines, merge request pipelines, parent-child pipelines, scheduled pipelines, multi-job pipelines, and more.

Commit
A commit is a record of changes made to the code or files, similar to what you see in a GitHub repository.

So this is the architecture of GitLab CI/CD. Let us learn how to build a simple CI/CD pipeline with GitLab.
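Before the step-by-step walkthrough, here is a minimal, hedged sketch of what a .gitlab-ci.yml for a small Python project might look like (the image, stage, and job names below are illustrative, not taken from the original example, so adapt them to your own project):

# Use a Python image from Docker Hub for every job
image: python:3.11

stages:
  - test

before_script:
  # Install the project's dependencies before the job's script runs
  - pip install -r requirements.txt

run_tests:
  stage: test
  script:
    # Run the unit tests
    - pytest

Committing a file like this to the repository root is enough for GitLab to start running the pipeline on each push.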
Building a Simple CI/CD Pipeline for a Python Application

1. First, create an account on GitLab.

2. Next, create a new project. You get four different options for creating your project; choose whichever is convenient. In this example, we import the project from GitHub.

3. Once the project is set up, create a YAML file named .gitlab-ci.yml, the default name GitLab looks for. A minimal example of such a file is sketched above. The key fields are:

image: the image we intend to use to execute our script.
before_script: helps you install the prerequisites required to run your scripts, and includes any commands you need to run before the script command.
after_script: outlines commands that run after each job, and can include handling for failed jobs.

To provide the Python image, we use the images available on Docker Hub.

4. Under the CI/CD tab, you will find the 'Jobs' tab, where you can see detailed logs and troubleshoot failures.

5. Next, create an account on DockerHub. You can find the image for Docker on DockerHub.

6. Go back to the YAML script and write a script to upload the Docker image to the repository. You will need credentials for this. To keep the credentials safe, use another GitLab feature: go to Settings -> CI/CD -> Variables. Here you can create global variables that you can refer to in the code. If you use the masked variable option, the variable's content will not be visible in logs.

7. Next, upload the image to a private repository. Tag the image with the repository name on DockerHub; this helps when writing the docker push command. The stage clause guarantees that the stages execute one after another. You can create variables both globally and inside jobs, and reference them as $var1.

8. In this example, we follow the Docker-in-Docker concept, meaning Docker must be available inside its own container: the Docker client and daemon run inside the container to execute the docker commands.

9. Now it is time to prepare the deployment server. The process involves configuring the tools and settings needed to automate the deployment. You can use any remote server; in this example, we use an Ubuntu server.

10. We used the following command to create a private key:

ssh-keygen

The method for creating a private variable is the same as described in step 6.

11. Next, add the YAML script for deployment. Before using the docker run command, stop any existing containers, especially those running on the same port. For this, we have added line 37. By default,
Kubernetes Persistent Volumes — 5 Detailed Steps to Create PVs
If you want to persist data in Kubernetes, the readable and writable disk space available inside Pods may look like a convenient option. One thing you must know, however, is that this disk space is tied to the lifecycle of the Pod. In practice, your applications need storage that lives independently of any single node and can survive cluster crashes. Kubernetes Persistent Volumes have your back here, with an independent lifecycle and great compatibility with stateful applications. This article walks you through five detailed steps to create and use persistent volumes in your cluster. Before that, let's dig into what exactly persistent volumes in Kubernetes are, along with some important terms.

Persistent Volumes in Kubernetes

A Kubernetes Persistent Volume (PV) is a piece of provisioned storage in a cluster that works as a cluster resource. It is a volume plugin for Kubernetes with an independent lifecycle and no dependency on the existence of a particular pod. Unlike container-local storage, a PV lets you read, write, and manage your data without worrying about losing it when a pod restarts or terminates. As a shared unit, all the containers in a pod can access the PV, so the data can be recovered even if an individual container crashes. Here are some important terms you must know.

Access Modes
The accessModes field describes how nodes and pods can access the volume. ReadWriteOnce allows the volume to be mounted for reading and writing by a single node. If you are using Kubernetes v1.22 or later, you can restrict read-write access to a single pod with ReadWriteOncePod.

Volume Mode
The volumeMode field controls how the volume is presented to pods. The default, Filesystem, mounts the volume into pods as a directory. Alternatively, you can use the Block mode to expose the volume as raw block storage without any filesystem on it.

Storage Classes
As the name suggests, storage classes are the different storage types you can use, depending on the hosting environment of your cluster. For instance, you can choose azurefile-csi for Azure Kubernetes Service (AKS) clusters, while do-block-storage works well for DigitalOcean Managed Kubernetes.
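Before committing to a class, it can help to check which storage classes your own cluster actually offers; a quick, hedged aside (the available classes and their names depend entirely on your provider):

# List the storage classes configured in the current cluster
kubectl get storageclass

# Inspect a specific class in more detail (the name 'standard' is illustrative)
kubectl describe storageclass standard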
Creating a Persistent Volume

Step 1: YAML File
The process of creating a Kubernetes persistent volume starts with a YAML file. The configuration below describes a simple persistent volume with a capacity of 1Gi:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  storageClassName: standard
  volumeMode: Filesystem

Step 2: Adding the Volume to the Cluster
Once you have written the persistent volume manifest, you can add it to your cluster. We recommend using kubectl to make this easier. To add the new persistent volume, run:

$ kubectl apply -f pv.yaml

If you see the following error message while running the command:

The PersistentVolume "example-pv" is invalid: spec: Required value: must specify a volume type

try dynamic volume creation, which automatically creates a persistent volume whenever it is used. Cloud providers usually restrict allocating inactive storage in the cluster, so a dynamic volume can be your good-to-go option.

Step 3: Linking Volumes to Pods
Linking a PV to pods requires a request to read and write files in the volume: a PersistentVolumeClaim (PVC). The claim below gets you access to the example-pv volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: ""
  volumeName: example-pv

As discussed above, you may need dynamic volume creation in some scenarios. You can request a claim for that as shown below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard

Note that with this dynamic claim you specify the accessModes and storageClassName fields yourself. All you need to do now is apply the claim to your cluster with kubectl:

$ kubectl apply -f pvc.yaml
persistentvolumeclaim/example-pvc created

Finally, use the volumes and volumeMounts fields to link the claim to your pods. This adds the PV to the containers section of the manifest and makes the files outlive individual container instances. To link the claim, use a pod manifest like this:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc
spec:
  containers:
    - name: pvc-container
      image: nginx:latest
      volumeMounts:
        - mountPath: /pv-mount
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: example-pvc

Step 4: Demonstrating Persistence
You can now verify the behaviour of the PV in different scenarios. Let's take a quick example for better understanding.

Get a shell to the pod:
$ kubectl exec --stdin --tty pod-with-pvc -- sh

Write a file to the mounted /pv-mount directory:
$ echo "This file is persisted" > /pv-mount/demo

Detach from the container:
$ exit

Delete the pod using kubectl:
$ kubectl delete pods/pod-with-pvc
pod "pod-with-pvc" deleted

Recreate the pod:
$ kubectl apply -f pvc-pod.yaml
pod/pod-with-pvc created

Get a shell to the container and read the file:
$ kubectl exec --stdin --tty pod-with-pvc -- sh
$ cat /pv-mount/demo
This file is persisted

Step 5: Managing Persistent Volumes
kubectl lets you manage your Kubernetes persistent volumes, whether you want to retrieve a list or remove a volume.

To retrieve a list of PVs, run:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS       REASON   AGE
pvc-f90a46bd-fac0-4cb5-b020-18b3e74dd3b6   1Gi        RWO            Delete           Bound    pv-demo/example-pvc   do-block-storage            7m52s

Review persistent volume claims:

$ kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
example-pvc   Bound    pvc-f90a46bd-fac0-4cb5-b020-18b3e74dd3b6   1Gi        RWO            do-block-storage   9m

Sometimes a volume or PV claim shows a Pending status because the storage class has not yet provisioned the storage. You can check what is slowing down the claim in the object's event history with the describe command:

$ kubectl describe pvc example-pvc
...
Events:
  Type    Reason                 Age    From                                                                    Message
  ----    ------                 ----   ----                                                                    -------
  Normal  Provisioning           9m30s  dobs.csi.digitalocean.com_master_68ea6d30-36fe-4f9f-9161-0db299cb0a9c  External provisioner is provisioning volume for claim "pv-demo/example-pvc"
  Normal  ProvisioningSucceeded  9m24s  dobs.csi.digitalocean.com_master_68ea6d30-36fe-4f9f-9161-0db299cb0a9c  Successfully provisioned volume pvc-f90a46bd-fac0-4cb5-b020-18b3e74dd3b6

Conclusion
By combining Kubernetes and Persistent Volumes, you can effectively and easily persist data beyond the lifecycle of any single pod, keeping your stateful applications running through restarts and crashes.