
  • Monitoring Updates at AWS re:Invent 2023

    Welcome to our recap of the exciting monitoring announcements made during the AWS re:Invent 2023 event in Las Vegas!
1. Natural Language Query in Amazon CloudWatch
AWS has introduced a natural language query feature for Amazon CloudWatch. You can now write more intuitive and expressive queries across logs and metrics, which makes it easier to extract valuable information from them. https://aws.amazon.com/blogs/aws/use-natural-language-to-query-amazon-cloudwatch-logs-and-metrics-preview/
2. Amazon Managed Service for Prometheus Collector
The new "Amazon Managed Service for Prometheus Collector" feature is here to simplify metric collection in Amazon EKS environments. The highlight is agentless metric collection: no additional agents need to be installed on the cluster. Interested in simpler management of your metrics in EKS? This is a must-read. https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/
3. Metric Consolidation with Amazon CloudWatch
To address hybrid and multicloud challenges, AWS has introduced a new capability for Amazon CloudWatch: you can now consolidate metrics from hybrid, multicloud, and on-premises environments in one place. This provides a more comprehensive view and makes resource management easier. https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/
Conclusion
These advancements enhance the user experience, simplify operations, and offer a consolidated perspective across diverse cloud setups. Exciting times lie ahead in the landscape of AWS monitoring!
Martín Carletti Cloud Engineer Teracloud

  • What C levels must know about their IT in the age of AI

    A recent comprehensive survey by Cisco underscores a critical insight: the majority of businesses are racing against time to deploy AI technologies, yet they confront significant gaps in readiness across key areas. This analysis, drawn from over 8,000 global companies, reveals an urgent need for enhanced AI integration strategies. See the original survey at Cisco global AI readiness survey, but if you want to know how to apply this information in your business today, keep reading. Key Findings Practical Steps for AI Integration Final Thoughts Key Findings - 97% of businesses acknowledged increased urgency to deploy AI technologies in the past six months. - Strategic time pressure: 61% believe they have a year at most to execute their AI strategy to avoid negative business impacts. - Readiness gaps in strategy, infrastructure, data, governance, talent, and culture, with 86% of companies not fully prepared for AI integration. The report highlights an AI Readiness Spectrum to categorize organizations: 1. Pacesetters: Leaders in AI readiness 2. Chasers: Moderately prepared 3. Followers: Limited preparedness 4. Laggards: Significantly unprepared This classification mirrors our approach at Teracloud using the Datera Data Maturity Model (D2M2) which we use to guide our customers towards data maturity and AI readiness. Practical Steps for AI Integration Let’s explore some recommendations that will help prepare your organization for the AI era. Develop a Robust Strategy - Prioritize AI in your business operations. The urgency is evident, with a substantial majority of businesses feeling the pressure to adopt AI technologies swiftly. - Create a multi-faceted strategy that addresses all key pillars simultaneously. You can use our D2M2 framework and cover all your bases. Alternatively, you can base your strategy on the generic AWS Well-Architected Framework Ensure Data Readiness - Recognize the critical role of 'AI-ready' data. Data serves as the AI backbone, yet it’s often the weakest link, not because we don't have data but because it isn’t accessible. - Tackle data centralization issues to leverage AI's full potential. Using cloud tools you can still have the information scattered. Consume it using a single endpoint, for instance using Amazon Athena and other data-at-scale tools. - Facilitate seamless data integration across multiple sources. Employing tools like AWS Glue can help in automating the extraction, transformation, and loading (ETL) processes, making diverse data sets more cohesive and AI-ready. Upgrade Infrastructure and Networking - To accommodate AI's increased power and computing demands, over two-thirds (79 percent) of companies will require further data center graphics processing units (GPUs) to support current and future AI workloads. - AI systems require large amounts of data. Efficient and scalable data storage solutions, along with robust data management practices, are essential. - Fast and reliable networking is necessary to support the large-scale transfer of data and the intensive communication needs of AI systems. - Enhance IT infrastructure to support increasing AI workloads. - Focus on network adaptability and performance to meet future AI demands. Implement Robust Governance and security - Develop comprehensive AI policies, considering data privacy, sovereignty, bias, fairness, and transparency. - AI-related regulations are evolving. A flexible governance strategy allows the organization to quickly adapt to new laws and standards. 
- A solid governance framework is necessary to ensure AI is used ethically and responsibly, adhering to ethical guidelines and standards. - Prioritize data security and privacy. Utilize AWS’s comprehensive security tools like AWS Identity and Access Management (IAM) and Amazon Cognito to safeguard sensitive data, a crucial aspect when deploying AI applications. Focus on Talent Development - Address the digital divide in AI skills. While most companies plan to invest in upskilling, there's skepticism about the availability of talent. - Emphasize continuous learning and skill development. Cultivate a Data-Centric Culture - Embrace a culture that values and understands the importance of data for AI applications. - Address data fragmentation: Over 80% of organizations face challenges with siloed data, a major impediment to AI effectiveness. Understanding these findings is just the first step. Implementing them requires a strategic approach, one that we champion through our Datera Data Maturity Model (D2M2). Our model not only aligns with Cisco's categorizations but also offers a roadmap for businesses to evolve from AI Followers to Pace setters. For a deeper dive into the Cisco survey, access the full report: Cisco Global AI Readiness Survey. To know more about how Teracloud helps its customers enter the Generative AI era, please contact us. Final Thoughts Adopting AI is no longer optional but a necessity for competitive advantage. By focusing on the six pillars of AI readiness, companies can transform challenges into opportunities, steering towards a future where AI is not just an ambition but a tangible asset driving business success. Carlos José Barroso Head of DataOps Teracloud If you want to know more about Generative AI with AWS, contact us at info@teracloud.io. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
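To make the "single endpoint" idea above concrete, here is a hedged sketch of querying data in place with Amazon Athena from the AWS CLI; the database, table, and bucket names are placeholders, not taken from the original article:

# Hedged sketch: run an ad-hoc SQL query over data in S3 (placeholder names)
aws athena start-query-execution \
  --query-string "SELECT region, COUNT(*) AS orders FROM sales_db.orders GROUP BY region" \
  --query-execution-context Database=sales_db \
  --result-configuration OutputLocation=s3://example-athena-results/

# Retrieve the results with the QueryExecutionId returned above
aws athena get-query-results --query-execution-id <query-execution-id>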

  • Get your first job in IT with AWS Certifications

    Could you land your first job with just AWS certifications and no experience at all? Almost… but not exactly. The following explores how helpful an AWS Certification is when landing your first job in IT, and why it’s so important not to fall for the “only certifications will guarantee you a job” trap.
An AWS certification is a professional credential offered by Amazon Web Services (AWS) that validates an individual's knowledge and expertise in various AWS cloud computing services and technologies. These certifications are designed to demonstrate a person's proficiency in using AWS services and solutions to design, deploy, and manage cloud-based applications and infrastructure. It's proof that you know how to use Amazon Web Services and understand cloud concepts. That said, one could deduce that obtaining these certifications is a really good way to demonstrate your knowledge and stand out among your peers. But is that all? AWS Partners would disagree.
What are AWS Partners?
AWS Partners are organizations that collaborate with AWS to offer a wide range of services, solutions, and expertise related to AWS cloud computing. AWS Partners come in various forms and play critical roles in helping businesses leverage AWS services to meet their unique needs. In other words, partners are companies that help AWS implement their services. There are different partner tiers: AWS Select Tier Services Partners, AWS Advanced Tier Services Partners, and AWS Premier Tier Services Partners.
The equation is really simple: the more qualified you are, the more clients you get. The more clients you get, the more money the company makes. Therefore, it’s in an AWS Partner's best interest to become more specialized, and that's where certifications come into play. To become a specialized partner, among other things, you need certified technical individuals. As you can see, to be an AWS Premier Partner, a company requires 25 certified individuals. And that’s why having a certification becomes really valuable, even more so if it’s a Professional or Specialty one.
Other Benefits
There are even badges for the number of certifications a partner holds, which lend more credibility to the service provided. There are other partner benefits, such as being eligible to earn credits for clients. That means receiving hundreds or even thousands of dollars in financing, through credits, to offer your clients.
Final thoughts
To sum up, if you don’t have any experience at all, having an AWS Certification will really help you obtain interviews, and if you combine the knowledge acquired with real-world scenarios you’ll be closer to landing your dream job. If, on the other hand, you only obtain the certification but don’t have any practical abilities or field work, the certificate won’t really help at all. Strategize. Find companies that are AWS Partners and apply to them. They’re looking for technical individuals and you’re looking for real-world scenarios. It’s in real-life Cloud challenges where you actually get to apply your knowledge and ultimately gain the confidence and proof you’ll need to continue developing your professional skills.
Ignacio Bergantiños Cloud Engineer Teracloud
If you want to know more about AWS, we suggest checking How to apply for Amazon's Service Delivery Program (SDP). If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to protect your SSH and SCP Connections with AWS Sessions Manager in 4 simple steps

    In certain scenarios, establishing secure SSH or SCP connections to EC2 instances within our private network becomes necessary. AWS Session Manager offers a robust solution to accomplish this, allowing us to avoid exposing critical ports and to enhance overall security.
Step #1: Install the latest version of the AWS CLI and the Session Manager plugin
To begin, install the latest versions of the AWS CLI and the Session Manager plugin. The following links provide detailed installation instructions:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
Step #2: Modify the SSH config file
Locate your SSH config file, which can be found at "~/.ssh/config" on Linux and macOS, or "C:\Users\<username>\.ssh\config" on Windows. Add the following lines to the config file:
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Step #3: Configure the SSM agent and the EC2 instance profile of your instances
Follow the SSM agent installation instructions provided in the documentation: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html
In my case, I’m installing it on an Ubuntu machine with the following commands:
sudo snap install amazon-ssm-agent --classic
sudo snap list amazon-ssm-agent
Additionally, attach the AmazonSSMManagedInstanceCore policy to the EC2 instances you wish to access, ensuring the necessary permissions for AWS Systems Manager core functionality.
Step #4: Start an SSH/SCP session from your local environment
Before initiating SSH/SCP sessions with your EC2 instances, specify your AWS profile, or the region of the EC2 instances if you are using temporary credentials, with the following commands:
export AWS_REGION=
export AWS_PROFILE=
# ssh command
ssh -i id_rsa ubuntu@i-xxxxxxxxx
# scp command
scp -i id_rsa ubuntu@i-xxxxxxxxx:/
By following these steps, you can confidently protect your SSH and SCP connections using AWS Session Manager. This guide empowers you to establish secure access while minimizing potential security risks. Happy coding and see you next time, in the Cloud!
Juan Bermudez Cloud Engineer Teracloud
If you want to know more about Cloud Security, we suggest checking Best Security Practices, Well-Architected Framework. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
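As an addendum to Step #3 above (not part of the original walkthrough): before starting an SSH session, you can confirm that the instance is actually registered with Systems Manager. The instance ID below is a placeholder:

# The instance should appear as a managed node
aws ssm describe-instance-information \
  --filters "Key=InstanceIds,Values=i-0123456789abcdef0"

# Optionally open a plain interactive session first to validate IAM permissions end to end
aws ssm start-session --target i-0123456789abcdef0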

  • How to apply for Amazon's Service Delivery Program (SDP)

    Amazon's Service Delivery Program (SDP) presents an exciting opportunity for service providers looking to work with one of the world's most influential tech giants. By joining the SDP, companies can establish strong relationships with Amazon Web Services (AWS) and access a global audience. However, the competition is fierce, and preparation is key to standing out in the application process. In this guide, we will explore essential tips and considerations for successfully applying to Amazon's SDP. 1. Understand the Program Requirements Before you embark on your journey to apply for Amazon's Service Delivery Program (SDP), it's crucial to have a comprehensive understanding of the program's requirements. These requirements serve as the foundation for your application, ensuring that you align with Amazon's expectations and can provide the level of service they seek. Here's an expanded breakdown of what this entails: Technical Expertise: Amazon's SDP is geared towards service providers who possess a deep understanding of Amazon Web Services (AWS). This means you should have a proven track record of working with AWS technologies, deploying solutions, and managing AWS resources effectively. Your technical expertise should extend to various AWS services and use cases. Certifications: AWS certifications are a testament to your knowledge and proficiency in AWS. Depending on the specific services you plan to deliver as part of the SDP, having relevant certifications can significantly bolster your application. Certifications demonstrate your commitment to continuous learning and your ability to stay updated with the latest AWS developments. Referenceable Clients: References from satisfied clients can be a powerful asset in your application. These references should be able to vouch for your capabilities, professionalism, and the positive impact your services have had on their AWS environments. Having a diverse range of referenceable clients from various industries can demonstrate your versatility and ability to adapt to different contexts. Business Practices: Amazon values partners who uphold high standards of business ethics and professionalism. Your company's business practices, including responsiveness, communication, and customer-centric approaches, should align with Amazon's values. A strong reputation in the industry for integrity and reliability can enhance your application's credibility. AWS Partnership Tier: Depending on the tier of partnership you aim to achieve within the SDP, there might be specific requirements to fulfill. Higher partnership tiers often require a deeper level of engagement with AWS, which could include meeting revenue targets, demonstrating a significant number of successful customer engagements, and showing a commitment to driving AWS adoption. 2. Demonstrate AWS Expertise As you navigate the application process for Amazon's Service Delivery Program (SDP), highlighting your expertise in Amazon Web Services (AWS) is a fundamental aspect that can set your application apart. Demonstrating your in-depth understanding of AWS technologies and your ability to leverage them effectively is key. Here's a comprehensive exploration of how to effectively showcase your AWS expertise: Project Portfolio: Provide a detailed portfolio of projects that showcases your hands-on experience with AWS. Highlight a variety of projects that demonstrate your proficiency across different AWS services, such as compute, storage, networking, security, and databases. 
Include project descriptions, the challenges you addressed, the solutions you implemented, and the outcomes achieved. Architectural Excellence: Describe how you've designed AWS architectures to meet specific business needs. Explain the decision-making process behind architecture choices, scalability considerations, fault tolerance measures, and security implementations. Highlight instances where your architectural decisions led to optimized performance and cost savings. Use Cases: Illustrate your familiarity with a range of AWS use cases. Detail scenarios where you've successfully deployed AWS solutions for tasks like application hosting, data analytics, machine learning, Internet of Things (IoT), and serverless computing. Showcase your ability to tailor AWS services to diverse client requirements. Problem Solving: Provide examples of how you've troubleshooted and resolved complex issues within AWS environments. Discuss instances where you identified bottlenecks, optimized performance, or resolved security vulnerabilities. This demonstrates your ability to handle real-world challenges that can arise during service delivery. AWS Best Practices: Emphasize your adherence to AWS best practices in terms of security, compliance, performance optimization, and cost management. Discuss how you've implemented well-architected frameworks and followed AWS guidelines to ensure the reliability and scalability of your solutions. 3. Focus on Innovation and Quality Amazon seeks partners who not only meet standards but also bring innovation and quality to the table. In your application, showcase how your company adds unique value through innovative approaches and how you ensure quality in every service you offer. Continuous Improvement: Highlight your commitment to continuous improvement in your services. Describe how you actively seek feedback from clients and incorporate their input to refine and enhance your solutions. Emphasize your agility in adapting to changing client needs and industry trends. Metrics of Success: Quantify the success of your innovative solutions with relevant metrics. If your solution improved performance, reduced costs, or increased revenue for your clients, provide specific numbers and percentages to highlight the tangible impact of your work. Quality Assurance: Describe your quality assurance processes and methodologies. Explain how you ensure that your solutions meet the highest standards in terms of functionality, security, and performance. Highlight any certifications, industry standards, or best practices you adhere to. Collaboration with Clients: Showcase instances where you collaborated closely with clients to co-create innovative solutions. Discuss how you facilitated workshops, brainstorming sessions, and prototyping activities to bring their ideas to life while adding your expertise. 4. Prepare Strong References Solid references from past clients are a vital component of your application. Select references that can vouch for your capabilities and achievements in delivering AWS services. Make sure you have authentic testimonials that highlight your professionalism and skills. 5. Articulate Your Value Proposition Clearly explain why your company is the right choice for the SDP. What makes your approach unique? How will your collaboration benefit Amazon and AWS customers? Articulate your value proposition concisely and convincingly. 6. 
Preparation and Detailed Review Thorough preparation and meticulous review are crucial steps in the application process for Amazon's Service Delivery Program (SDP). Any grammatical errors or inaccuracies in your application could impact the impression you make on Amazon's evaluators. Here's a detailed exploration of how to approach these aspects: Organized Structure: Organize your application coherently and clearly. Divide your content into distinct sections such as past experience, value proposition, project examples, and references. Use headers and bullet points to enhance readability and highlight key points. Relevant Content: Ensure that each section of your application is relevant to the requirements of the SDP. Avoid including redundant information or content that does not directly contribute to demonstrating your experience and capability to deliver quality services on AWS. Accurate Information: Verify that all provided information is accurate and up-to-date. Including incorrect or outdated information can affect the credibility of your application. Exemplary Stories: In the past experience section, choose project stories that exemplify your achievements and capabilities. Provide specific details about challenges you faced, how you overcame them, and the tangible results you achieved. Professional Language: Maintain a professional and clear tone throughout your application. Avoid unnecessary jargon or overly technical language that might hinder understanding for evaluators who may not be experts in all technical areas. Reflection and Context: Don't just list achievements, but also provide context and reflection on your experience. Explain why certain projects were challenging or why you chose specific approaches. This demonstrates your ability to think critically and learn from experiences. Grammatical Review: Carefully review your application for grammatical and spelling errors. A professionally written and well-edited application showcases your attention to detail and seriousness. Consistent Formatting: Maintain consistent formatting throughout the application. Use the same font, font size, and formatting style throughout the document to create a coherent and professional presentation. External Feedback: Consider asking colleagues or mentors to review your application. Often, an extra set of eyes can identify areas for improvement that you might have overlooked. Deadlines and Submission: Ensure you meet the deadlines set by Amazon and submit your application according to the provided instructions. Applying for Amazon's SDP is an exciting opportunity but requires careful planning and preparation. By following these tips and considerations, your application will be well on its way to standing out among competitors and establishing a strong partnership with Amazon Web Services. Remember that authenticity, AWS expertise, and a clear value proposition are key elements to impressing in the selection process. Best of luck in your application to Amazon's SDP! For more info: https://aws.amazon.com/partners/programs/service-delivery/?nc1=h_ls Julian Catellani Cloud Engineer Teracloud If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • Secure Your Data with SOC 2 Compliant Solutions

    In today's digital landscape, where data breaches and cyber threats have become increasingly sophisticated, protecting sensitive information is of paramount importance. One effective approach that organizations are adopting to ensure the security of their data is by implementing SOC 2-compliant solutions. In this article, we'll delve into what SOC 2 compliance entails, its significance for safeguarding data, and how businesses can benefit from adopting such solutions. Table of Contents Understanding SOC 2 Compliance Key Components of SOC 2 Compliance Who Needs SOC 2 Compliance? In an era where data breaches can lead to devastating financial and reputational losses, companies must adopt robust strategies to safeguard their sensitive information. SOC 2 compliance offers a comprehensive framework that helps organizations fortify their data security measures. By adhering to the SOC 2 standards, companies can not only protect themselves from potential cyber threats but also gain a competitive edge in the market. Understanding SOC 2 Compliance What is SOC 2? SOC 2, or Service Organization Control 2, is a set of stringent compliance standards developed by the American Institute of CPAs (AICPA). It focuses on the controls and processes that service providers implement to ensure the security, availability, processing integrity, confidentiality, and privacy of customer data. Unlike SOC 1, which assesses financial controls, SOC 2 is geared towards evaluating the effectiveness of a company's non-financial operational controls. Why is SOC 2 Compliance Important? SOC 2 compliance is crucial because it reassures customers, partners, and stakeholders that a company has established rigorous security measures to protect sensitive data. As data breaches continue to make headlines, consumers are becoming more cautious about sharing their information with businesses. SOC 2 compliance demonstrates a commitment to data protection, enhancing trust and credibility. Key Components of SOC 2 Compliance Security Security is a foundational component of SOC 2 compliance. It involves implementing safeguards to protect against unauthorized access, data breaches, and other security threats. This includes measures such as multi-factor authentication, encryption, and intrusion detection systems. Availability Businesses must ensure that their services are available and operational when needed. SOC 2 compliance assesses the measures in place to prevent and mitigate service interruptions, ranging from robust infrastructure to disaster recovery plans. Processing Integrity Processing integrity focuses on the accuracy and completeness of data processing. Companies must have controls in place to ensure that data is processed correctly, preventing errors and unauthorized modifications. Confidentiality Confidentiality revolves around protecting sensitive information from unauthorized disclosure. This includes customer data, intellectual property, and other confidential information. Privacy Privacy controls are vital for businesses that handle personally identifiable information (PII). SOC 2 compliance evaluates whether a company's practices align with relevant data privacy regulations. Who Needs SOC 2 Compliance? SaaS Companies Software-as-a-Service (SaaS) companies often handle a vast amount of customer data. Achieving SOC 2 compliance is essential for building trust and attracting clients concerned about the security of their data. Cloud Service Providers Cloud service providers store and process data for various clients. 
SOC 2 compliance demonstrates their commitment to ensuring the security, availability, and privacy of customer data.
Data-Centric Businesses
Companies that rely heavily on data, such as e-commerce platforms or healthcare providers, need SOC 2 compliance to protect customer information and meet legal requirements.
Stay tuned for the rest of the article, where we will delve deeper into achieving SOC 2 compliance, its benefits, and its challenges, as well as a comparison with other compliance frameworks.
Paulo Srulevitch Content Creator Teracloud
If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to integrate Prometheus in an EKS Cluster as a Data Source in AWS Managed Grafana

    Whether you're an experienced DevOps engineer or just starting your cloud journey, this article will equip you with the knowledge and tools needed to effectively monitor and optimize your EKS environment.
Objective
Configure and use Prometheus to collect metrics on an Amazon EKS cluster and view those metrics in AWS Managed Grafana (AMG). Provide usage instructions and an estimate of the costs of connecting Prometheus metrics as an AMG data source. Let’s assume that Fluent Bit is already configured on the EKS cluster.
Step #1: Prometheus Configuration
Ensure Prometheus is installed and running in your Amazon EKS cluster. You can install it via Terraform using the Helm chart (a minimal install sketch appears just before the final thoughts below). Verify that Prometheus is successfully collecting metrics from your cluster nodes and applications.
Step #2: Configure the data source in Grafana
Now you’ll need to configure the data source in Grafana (the LoadBalancer created for Prometheus will serve as a reference). Make sure the AWS Route 53 console is open and that a private Hosted Zone named "monitoring.domainname" is created. Inside this Hosted Zone, create an Alias record pointing to the LoadBalancer previously mentioned. This data will be used to configure the Prometheus service as the data source in AMG.
AWS Managed Grafana Configuration
Provision an instance of AWS Managed Grafana. Access the AWS Managed Grafana console and obtain the URL of the Grafana instance. Ensure you have the necessary permissions to manage data sources in AWS Managed Grafana.
Configure Prometheus as a Data Source in AWS Managed Grafana: Access the AWS Managed Grafana console using the URL obtained in the previous step. Navigate to the "Configuration" section and select "Data sources". Click on "Add data source" and choose "Prometheus" as the data source type. Complete the required fields, including the Prometheus endpoint URL and authentication credentials if applicable, or a Workspace IAM Role. Save the data source configuration.
Visualizing Metrics in Grafana: Identify the KPIs you need to visualize in the dashboard, then create dashboards in Grafana to visualize the metrics collected by Prometheus. Utilize Grafana's query and visualization options to create customized visualizations of your metrics. Explore different panel types such as graphs, tables, and text panels to present the information in a clear and understandable manner.
Step #3: Estimate costs
To estimate the costs associated with integrating Prometheus as a data source in AWS Managed Grafana, consider the following:
AWS Managed Grafana Costs: Refer to the AWS documentation to understand the details and pricing associated with AWS Managed Grafana. According to the documentation, the price is per license, either editor or user: the editor can create and edit both the workspace and the metric displays, while the user can only view the panels and metrics previously configured by the editor (https://aws.amazon.com/es/grafana/pricing/). Today, the editor license costs $9 and the user license costs $5.
Storage Costs: If AWS Managed Grafana utilizes additional storage to store metrics collected by Prometheus, refer to the AWS documentation for information on pricing and available storage options.
Remember that costs may vary depending on your specific configuration and the AWS region where your AWS Managed Grafana instance is located. Consult the documentation and updated pricing details for an accurate cost estimate.
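As referenced in Step #1 above, here is a minimal sketch of installing Prometheus into the cluster with Helm. The repository, release, and namespace names are assumptions rather than part of the original setup; the same chart can also be wrapped in a Terraform helm_release resource:

# Minimal sketch (assumed names): install the community Prometheus chart with Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus prometheus-community/prometheus \
  --namespace prometheus --create-namespace

# Confirm the Prometheus pods are up before wiring the AMG data source
kubectl get pods -n prometheus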
Final thoughts
In conclusion, this is a very interesting and easy-to-implement alternative for clusters that run a large number of pods. That scenario generates an even larger number of metrics, and that's where this license-based solution becomes much more cost-effective than a pricing model based on the number of metrics ingested.
Martín Carletti Cloud Engineer Teracloud
If you want to know more about EKS, we suggest checking Cross account access to S3 using IRSA in EKS with Terraform as IaaC. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to use a secondary CIDR in EKS

    In this Teratip, we’ll dive deep into the realm of secondary CIDR blocks in AWS EKS and explore how they can empower you to enhance your cluster's flexibility and scalability. We’ll uncover the benefits of leveraging secondary CIDR blocks and walk you through the process of configuring them.
Introduction
As you expand your applications and services, you may face scenarios where the primary CIDR block assigned to your EKS cluster becomes insufficient. Perhaps you're introducing additional microservices, deploying multiple VPC peering connections, or integrating with legacy systems that have their own IP address requirements. These situations call for a solution that allows you to allocate more IP addresses to your cluster without sacrificing stability or network performance. Secondary CIDR blocks provide an elegant solution by enabling you to attach additional IP address ranges to your existing VPC, thereby expanding the available address space for your EKS cluster. Throughout this post, we’ll go over the step-by-step process of adding secondary CIDR blocks to your AWS EKS cluster.
Create the EKS cluster
For this demonstration, I created a simple EKS cluster with only one node and deployed the famous game 2048, which can be accessed through an Internet-facing Application Load Balancer. So, the EKS cluster and workload look like this: And this is the VPC where the cluster is located: As you can see, this VPC has the 10.0.0.0/16 IPv4 CIDR block assigned. Keeping this in mind, all pods in the cluster will get an IP address within this range.
Next, we will configure this same cluster to use a secondary CIDR block in the same VPC. This way, almost all pods will get IP addresses from the new CIDR.
Step by step process
Step #1: Create the secondary CIDR within our VPC
RESTRICTION: EKS supports additional IPv4 CIDR blocks in the 100.64.0.0/16 range.
It’s possible to add a second CIDR to the current VPC and, of course, create subnets within this VPC using the new range. I did it through Terraform, but this can be done using the AWS Console as well. The code that I used to create the VPC is the following:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.19.0"

  name = "teratip-eks-2cidr-vpc"
  cidr = "10.0.0.0/16"

  secondary_cidr_blocks = ["100.64.0.0/16"]

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}

Take a look at the line secondary_cidr_blocks = ["100.64.0.0/16"] and at the private subnets (the last two) created in this CIDR: private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]. The resulting VPC looks like this:
Step #2: Configure the CNI
DEFINITION: CNI (Container Network Interface) concerns itself with network connectivity of containers and with removing allocated resources when a container is deleted.
In order to use the secondary CIDR in the cluster, you need to configure some environment variables in the CNI daemonset configuration by running the following commands:
1. To turn on custom network configuration for the CNI plugin, run the following command:
kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
2. To add the ENIConfig label for identifying your worker nodes, run the following command:
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
3. Enable the parameter to assign prefixes to network interfaces for the Amazon VPC CNI DaemonSet:
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
Then terminate the worker nodes so that Auto Scaling launches newer nodes that come bootstrapped with the custom network config.
Step #3: Create the ENIConfig resources for the new subnets
As a next step, we will add custom resources for the ENIConfig custom resource definition (CRD). In this case, we will store VPC subnet and security group configuration information in these CRDs so that worker nodes can access them to configure the VPC CNI plugin. Create a custom resource for each subnet by replacing the subnet and security group IDs. Since we created two secondary subnets, we need to create two custom resources:
---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2a
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0fabe93c6f43f492b
---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2b
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0484194486fad2ce3
Note: The ENIConfig name must match the Availability Zone of your subnets. You can get the cluster security group included in the ENIConfigs in the EKS Console or by running the following command:
aws eks describe-cluster --name $cluster_name --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text
Once the ENIConfig YAML is created, apply it with kubectl apply -f
Check the new network configuration
Finally, all the pods running on your worker nodes should have IP addresses within the secondary CIDR (a short verification sketch follows at the end of this section). As you can see in the screenshot below, the service is still working as it should:
Also, notice that the EC2 worker node now has an ENI (Elastic Network Interface) with an IP address within the secondary CIDR:
Configure max-pods per node
Enabling a custom network removes an available network interface from each node that uses it, and the primary network interface for the node is not used for pod placement when a custom network is enabled. In this case, you must update max-pods. To do this, I used Terraform to update the node group, increasing the max-pods parameter. The following is the IaC for the EKS cluster:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"

  cluster_name    = local.cluster_name
  cluster_version = "1.24"

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]
  cluster_endpoint_public_access = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name           = "node-group-1"
      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 1
      desired_size = 1

      bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=110'"

      pre_bootstrap_user_data = <<-EOT
        export CONTAINER_RUNTIME="containerd"
        export USE_MAX_PODS=false
      EOT
    }
  }
}

In the example I set max-pods = 110, which is more than this EC2 instance type can actually run, but it acts as a hard upper limit: the node will allocate as many pods as its resources allow.
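To make the "Check the new network configuration" step above concrete, these are the kinds of commands you could run to confirm that pods are drawing addresses from 100.64.0.0/16. They are a hedged sketch (the instance ID is a placeholder), not part of the original post:

# Pod IPs should now fall inside the secondary CIDR
kubectl get pods -A -o wide

# Inspect the ENIs attached to a worker node (placeholder instance ID)
aws ec2 describe-network-interfaces \
  --filters "Name=attachment.instance-id,Values=i-0123456789abcdef0" \
  --query "NetworkInterfaces[].PrivateIpAddress" --output table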
If you’re interested in learning more about how to increase the amount of available IP addresses for your nodes, here is an interesting guide from AWS that you can read.
Just in case…
If, for some reason, something goes wrong and your pods aren’t getting IP addresses, you can easily roll back your changes by running the following command and launching new nodes:
kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false
After running the command, the whole cluster will go back to using the primary CIDR again.
Final Thoughts
By diligently following the step-by-step instructions provided in this guide, you have acquired the knowledge to effectively set up and supervise secondary CIDR blocks, tailoring your network to harmonize precisely with the needs of your applications. This newfound adaptability not only simplifies the allocation of resources but also grants you the capability to seamlessly adjust your infrastructure in response to evolving requirements.
Ignacio Rubio Cloud Engineer Teracloud
If you want to know more about Kubernetes, we suggest checking Conftest: The path to more efficient and effective Kubernetes automated testing. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to add 2FA to our SSH connections with Google Authenticator

    In this Teratip, we will learn how to configure the Google Authenticator PAM module for our SSH (Secure Shell) server connections to add an extra layer of security and protect our systems and data from unauthorized access. The Google Authenticator PAM module is a software component that integrates with the Linux PAM framework to provide two-factor authentication using the Google Authenticator app. It enables users to generate time-based one-time passwords (TOTPs) on their phones, which serve as the second factor for authentication.
Two-factor authentication (2FA) is a security measure that requires users to provide two different factors to verify their identity. These factors can include something they know (like a password), something they have (like a mobile device), or something they are (like a fingerprint). Combining two factors adds an extra layer of security, making it significantly harder for attackers to gain unauthorized access. Even if an attacker manages to obtain or guess the password, they would still need the second factor to authenticate successfully. This helps protect against various types of attacks, such as password cracking or phishing, and enhances overall security by requiring dual verification.
This is how it’s done:
Step #1: Install the Google Authenticator app
The first step is installing the Google Authenticator app on our smartphone. It’s available for Android and iOS.
Step #2: Install the Google Authenticator PAM module
Secondly, we’ll install the Google Authenticator PAM module. On Debian/Ubuntu we can find this module in the repositories:
sudo apt install libpam-google-authenticator
Step #3: SSH server configuration
After installing the requirements, we need to configure the SSH server to use the new PAM module. We can do it easily by editing a couple of configuration files. In /etc/pam.d/sshd add the following line:
auth required pam_google_authenticator.so
And in /etc/ssh/sshd_config change the following option from ‘no’ to ‘yes’:
ChallengeResponseAuthentication yes
We need to restart the service to apply the changes to our SSH server:
sudo systemctl restart sshd.service
Finally, we just need to run the google-authenticator command to start configuring 2FA:
google-authenticator
This will trigger a few configuration questions; the first one generates a QR code, a secret key, and recovery codes. You will need to scan the QR code with the Google Authenticator app previously installed on your phone:
Do you want authentication tokens to be time-based (y/n) y
After scanning the QR code, the TOTPs will start appearing in the app.
Step #4: Answer context-specific questions
After this you have to answer the other questions based on your particular scenario:
Do you want me to update your "/home/facu/.google_authenticator" file? (y/n) y
Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, a new token is generated every 30 seconds by the mobile app. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. This allows for a time skew of up to 30 seconds between authentication server and client.
If you experience problems with poor time synchronization, you can increase the window from its default size of 3 permitted codes (one previous code, the current code, the next code) to 17 permitted codes (the 8 previous codes, the current code, and the 8 next codes). This will permit a time skew of up to 4 minutes between client and server.
Do you want to do so? (y/n) n
If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting? (y/n) y
And that's it: with these simple steps we have 2FA configured on the SSH server, and the TOTP will be required in addition to your password the next time you try to connect:
ssh facu@192.168.35.72
Password:
Verification code:
With this Teratip we have shown how easy it is to implement 2FA and increase the security of our SSH servers without much effort.
Final thoughts
Incorporating an extra layer of security through 2FA with Google Authenticator into your SSH access is a pivotal step towards fortifying your cloud infrastructure. By following the systematic guide outlined in this blog post, you've empowered yourself to safeguard sensitive data and resources from unauthorized access. With enhanced authentication in place, you're well-equipped to confidently navigate the digital landscape, knowing that your cloud resources remain shielded from potential threats.
Facundo Montero Cloud Engineer Teracloud
If you want to know more about Security, we suggest checking Prevent (and save money in the process) Security Hub findings related to old ECR images scanned. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
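One optional hardening idea that goes beyond the original steps, offered here as an assumption to validate in a test environment: if your users already authenticate with SSH keys, sshd can be told to require both the key and the TOTP prompt.

# Hedged sketch: require an SSH key *and* the PAM/TOTP prompt for every login.
# Add to /etc/ssh/sshd_config:
#   AuthenticationMethods publickey,keyboard-interactive
# Then validate the config and restart the daemon:
sudo sshd -t && sudo systemctl restart sshd.service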

  • What is Istio Service Mesh? Gain Observability over your infrastructure

    In this TeraTip we’ll go over a brief introduction to Istio Service Mesh by installing it on our cluster and gaining basic visibility of traffic flow. Learn all about Istio Service Mesh for modern microservices applications with the practical examples listed below. If you’re looking to provide powerful features to your Kubernetes cluster, in this post you’ll learn:
Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication, and authorization
Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection
A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas
Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress
Before you continue reading, make sure you’re familiar with the following terms.
Glossary
Service Mesh: a dedicated and configurable infrastructure layer that handles the communication between services without having to change the code in a microservice architecture. Some of the Service Mesh responsibilities include traffic management, security, observability, health checks, load balancing, etc.
Sidecar (imagine a motorcycle sidecar): the terminology used to describe the container which is going to run side-by-side with the main container. This sidecar container can perform some tasks to reduce the pressure on the main one. For example, it can perform log shipping, monitoring, file loading, etc. The general use is as a proxy server (TLS, auth, retries).
Control Plane: we understand the control plane as the “manager” of the Data Plane, and the Data Plane as the one that centralizes the proxy sidecars through the Istio agent.
Just as a heads up, since we’re focusing on Istio, we’re going to skip the minikube setup. From this point on, we’ll assume you already have a testing cluster to play around with Istio, as well as basic tools such as istioctl. Ok, now that we’ve got those covered, let's get our hands dirty!
What is Istio?
Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.
Integrate Istio into a cluster
Alrighty, first things first. Let's get Istio on our cluster. There are three options for integrating Istio:
Install it via istioctl (istioctl install --set profile=demo -y)
Install it via the Istio Operator
Install it via Helm
The previous step will install the core components (istio ingress gateway, istiod, istio egress gateway). Run istioctl verify-install if you are not sure of what you just installed into your cluster. You should see something like this:
Now, to follow along with this demo we recommend you make use of the Istio samples directory, where you will find demo apps to play around with.
Label your namespace to inject sidecar pods
Time to get our namespace labeled; that's how Istio knows where to inject the sidecar pods. Run kubectl label namespace default istio-injection=enabled to enable injection, or kubectl label namespace default istio-injection=disabled to explicitly mark the namespace as not needing injection.
Now run istioctl analyze and this is the expected output:
Time to deploy some resources.
Execute kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
The previous command will create the following resources (see the screenshot below). Make sure everything is up and running before continuing; execute kubectl get pods -A to verify. And… voila! There we have two containers per pod.
Note that the Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API:
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.1" | kubectl apply -f -; }
If using Minikube, remember to open a tunnel!
minikube tunnel
It’s gateway time:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
Visualize your service mesh with Kiali
Okey-dokey, now it's time for some service mesh visualization; we are going to use Kiali. Execute the following:
kubectl apply -f samples/addons
The previous command will create some cool stuff listed below. Wait for the Kiali deployment to roll out:
kubectl rollout status deployment/kiali -n istio-system
Check it out with kubectl -n istio-system get svc kiali
Everything looks good? Cool. Now it's time to navigate through the dashboard: execute istioctl dashboard kiali and go to your browser. If you’re testing this on a non-productive site (meaning, without traffic) then it’s going to look empty and boring since we don't have any traffic flowing. Check your IP with minikube ip and execute the following exports:
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
Awesome, now we can curl our app and see what happens:
curl "http://$INGRESS_HOST:$INGRESS_PORT/productpage"
Fair enough, but let's get some more traffic with a while loop as follows:
while sleep 0.01; do curl -sS "http://$INGRESS_HOST:$INGRESS_PORT/productpage" &> /dev/null; done
Alright, now look at the screenshot below. Kiali provides us with a useful set of visual tools to better understand our workload traffic. On the second screenshot we can see the power of Kiali: the white dots on top of the green lines represent the traffic (even though it's a static image, picture those dots moving in different directions and speeds!).
In summary, Istio provides us with a powerful set of tools. In this TeraTip we saw a brief introduction to Istio Service Mesh. We focused our attention on installing it on our cluster and on gaining visualization of some basic traffic flows. Stay tuned for more!
References
https://istio.io/latest/docs/
https://istio.io/latest/docs/examples/bookinfo/
Tomás Torales Cloud Engineer Teracloud
If you want to know more about Kubernetes, we suggest checking Enhance your Kubernetes security by leveraging KubeSec. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
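As a brief addendum to the walkthrough above (not from the original post), two hedged commands that help confirm the mesh is actually wired up before opening Kiali:

# Each application pod should report 2/2 containers (the app plus the istio-proxy sidecar)
kubectl get pods -n default

# Report whether every sidecar is synced with istiod
istioctl proxy-status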

  • How to use Kubecost in an EKS Cluster

    Kubecost is an efficient and powerful tool that allows you to manage costs and resource allocation in your Kubernetes cluster. It provides a detailed view of the resources used by your applications and helps optimize resource usage, which can ultimately reduce cloud costs. In this document, we’ll guide you through the necessary steps to use Kubecost in your Kubernetes cluster. Let’s dive in.
Deploy Kubecost in Amazon EKS
Step #1: Install Kubecost on your Amazon EKS cluster
Step #2: Generate the Kubecost dashboard endpoint
Step #3: Access the cost monitoring dashboard
Overview of available metrics
Final thoughts
Deploy Kubecost in Amazon EKS
To get started, follow these steps to deploy Kubecost into your Amazon EKS cluster in just a few minutes using Helm. First, install the following tools: Helm 3.9+, kubectl, and optionally eksctl and the AWS CLI. You also need access to an Amazon EKS cluster; to deploy one, see Getting started with Amazon EKS. If your cluster is running Kubernetes version 1.23 or later, you must have the Amazon EBS CSI driver installed on your cluster.
Step #1: Install Kubecost on your Amazon EKS cluster
In your environment, run the following command from your terminal to install Kubecost on your existing Amazon EKS cluster:
helm upgrade -i kubecost \
  oci://public.ecr.aws/kubecost/cost-analyzer --version 1.99.0 \
  --namespace kubecost --create-namespace \
  -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/cost-analyzer/values-eks-cost-monitoring.yaml
Note: You can find all available versions of the EKS-optimized Kubecost bundle here. We recommend finding and installing the latest available Kubecost cost analyzer chart version. By default, the installation includes certain prerequisite software, including Prometheus and kube-state-metrics. To customize your deployment (e.g., skipping these prerequisites if you already have them running in your cluster), you can find a list of available configuration options in the Helm configuration file.
Step #2: Generate the Kubecost dashboard endpoint
After you install Kubecost using the Helm command in Step #1, the installation should complete in under two minutes. You can then run the following command to enable port-forwarding and expose the Kubecost dashboard:
kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090
Step #3: Access the cost monitoring dashboard
In your web browser, navigate to http://localhost:9090 to access the dashboard. You can now start tracking your Amazon EKS cluster cost and efficiency. Depending on your organization’s requirements and setup, there are several options to expose Kubecost for ongoing internal access. Here are a few examples you can use as references: check out the Kubecost documentation for Ingress Examples as a reference for using the Nginx ingress controller with basic auth, or consider using the AWS Load Balancer Controller to expose Kubecost together with Amazon Cognito for authentication, authorization, and user management. You can learn more in How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps.
-Overview of available metrics
The following are examples of the metrics available within the Kubecost dashboard. Use Kubecost to quickly see an overview of Amazon EKS spend, including cumulative cluster costs, associated Kubernetes asset costs, and monthly aggregated spend.
-Cost allocation by namespace
View monthly Amazon EKS costs as well as cumulative costs per namespace and other dimensions up to the last 15 days.
Overview of available metrics
The following are examples of the metrics available within the Kubecost dashboard. Use Kubecost to quickly see an overview of Amazon EKS spend, including cumulative cluster costs, associated Kubernetes asset costs, and monthly aggregated spend.

Cost allocation by namespace
View monthly Amazon EKS costs as well as cumulative costs per namespace and other dimensions for up to the last 15 days. This enables you to better understand which parts of your application are contributing to Amazon EKS spend.

Spend and usage for other AWS services associated with Amazon EKS clusters
View the costs of the AWS infrastructure assets that are associated with your EKS resources.

Export Cost Metrics
At a high level, Amazon EKS cost monitoring is deployed with Kubecost, which includes Prometheus, an open-source monitoring system and time series database. Kubecost reads metrics from Prometheus, performs cost allocation calculations, and writes the metrics back to Prometheus. Finally, the Kubecost front end reads metrics from Prometheus and shows them on the Kubecost user interface (UI). The architecture is illustrated by the following diagram:

[Diagram: Kubecost reading metrics]

With this pre-installed Prometheus, you can also write queries to ingest Kubecost data into your current business intelligence system for further analysis. You can also use it as a data source for your existing Grafana dashboards to display Amazon EKS cluster costs in a view your internal teams are already familiar with. To learn more about how to write Prometheus queries, review Kubecost's documentation or use the example Grafana JSON models in the Kubecost GitHub repository as references.

AWS Cost and Usage Report (AWS CUR) integration
To perform cost allocation calculations for your Amazon EKS cluster, Kubecost retrieves the public pricing information of AWS services and resources from the AWS Price List API. You can also integrate Kubecost with the AWS CUR to improve the accuracy of the pricing information specific to your AWS account (for example, Enterprise Discount Programs, Reserved Instance usage, Savings Plans, and Spot usage). You can learn more about how the AWS CUR integration works at AWS Cloud Integration.

Cleanup
You can uninstall Kubecost from your cluster with the following command:

helm uninstall kubecost --namespace kubecost

Final thoughts
Implementing Kubecost in your Amazon EKS cluster can significantly enhance your cost management and resource optimization efforts. By providing a comprehensive view of resource usage and associated costs, Kubecost empowers you to make informed decisions about resource allocation, which can lead to reduced cloud costs. Its straightforward Helm-based deployment makes it accessible to users with varying levels of expertise. Additionally, Kubecost's integration with Prometheus lets you leverage your existing business intelligence systems and Grafana dashboards for further analysis and visualization. Overall, Kubecost is an invaluable tool for cost-conscious organizations seeking to maximize their Amazon EKS cluster's efficiency while keeping cloud expenditures in check. Give Kubecost a try today and take control of your Kubernetes cost management with ease.

Martín Carletti
Cloud Engineer
Teracloud

If you want to know more about Kubernetes, we suggest checking out Conftest: The path to more efficient and effective Kubernetes automated testing.

If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to stay up to date with any news! 👇

  • How to gain control over your Pull Request in Azure DevOps in 5 steps

By leveraging Azure Functions, webhooks, and pull request configurations, you can efficiently validate branches across multiple repositories without the need for separate pipelines. Let's learn how.

Azure DevOps is a powerful cloud-based platform that offers a wide range of development tools and services. Nevertheless, when it comes to running pipelines across multiple repositories, Azure DevOps has certain limitations that make it cumbersome to perform build validations on specific branches existing in multiple repositories. But fear not! We have a solution that will save you time and effort.

In this guide, we'll take you through the steps to set up this solution. You'll learn how to:
- Obtain an authentication token
- Prepare the Azure Function code
- Configure webhooks
- Set up the pull request protection policy

Following these steps will streamline your build validation process and make your Azure DevOps workflows a breeze.

What you'll need
Here you have the magic ingredients:
- 1 Azure DevOps account
- 1 Webhook
- 1 Azure Function
- 1 Token

To achieve our goal successfully in this lab, we'll follow a series of steps.

Step #1: Create a token
First of all, we must create a token to be used in an Azure Function. This function will be triggered whenever a pull request is created, thanks to the webhook that connects the Azure Function and Azure DevOps. (Don't worry, it's much simpler than it sounds.) Let's get started by following these instructions:
1. Log in to your Azure DevOps account.
2. Navigate to your profile settings by clicking on your profile picture or initials in the top-right corner of the screen.
3. From the dropdown menu, select "Security".
4. In the "Personal access tokens" section, click on "New Token".
5. Provide a name for your token to identify its purpose.
6. Choose the desired organization and set the expiration date for the token.
7. Under "Scope", select the appropriate level of access needed for your token. For example, if you only need to perform actions related to build and release pipelines, choose the relevant options.
8. Review and confirm the settings.
9. Once the token is created, make sure to copy and securely store it. Note that you won't be able to view the token again after leaving the page, so be careful!

You can now use this token in your Azure Function or other applications to authenticate and access Azure DevOps resources.

Step #2: Prepare the Azure Function
1. Click on the "Create a resource" button (+) in the top-left corner of the portal.
2. In the search bar, type "Function App" and select "Function App" from the results.
3. Click on the "Create" button to start the creation process.
4. In the "Basics" tab, provide the necessary details:
   - Subscription: Select your desired subscription.
   - Resource Group: Select a name for the resource group.
   - Function App name: Enter a unique name for your function app.
   - Runtime stack: Choose .NET.
   - Region: Select the region closest to your target audience.
5. Click on the "Next" button to proceed to the "Hosting" tab.
6. Configure the hosting settings:
   - Operating System: Windows.
   - Plan type: Select the appropriate plan type (Consumption, Premium, or Dedicated).
   - Storage account: Create a new storage account or select an existing one.
7. Click on the "Review + Create" button to proceed.
8. Review the summary of your configuration, and if everything looks good, click on the "Create" button to create the Azure Function.

The deployment process may take a few minutes. Once it's completed, you'll see a notification indicating that the deployment was successful.
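If you prefer the command line over the portal, the same Function App can be provisioned with the Azure CLI. The sketch below mirrors the portal choices above (Windows, .NET, Consumption plan); the resource names and region are placeholders, so adjust them to your own subscription:

# Placeholder names and region - change them to match your environment
az group create --name rg-pr-validation --location eastus

az storage account create \
  --name prvalidationstorage01 \
  --resource-group rg-pr-validation \
  --location eastus \
  --sku Standard_LRS

az functionapp create \
  --name func-pr-validation \
  --resource-group rg-pr-validation \
  --storage-account prvalidationstorage01 \
  --consumption-plan-location eastus \
  --os-type Windows \
  --runtime dotnet \
  --functions-version 4   # pick the Functions version that matches your function code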
Navigate to the newly created Function App and replace the function code with the following (.NET code):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Newtonsoft.Json;

// Add your PAT (Token)
private static string pat = "";

public static async Task Run(HttpRequestMessage req, TraceWriter log)
{
    try
    {
        log.Info("Service Hook Received.");

        // Get request body
        dynamic data = await req.Content.ReadAsAsync
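From this point, the function reads the pull request details from the webhook payload and runs whatever branch validation you need across your repositories. Whatever that logic looks like, it typically reports its result back to Azure DevOps through the REST API, authenticated with the PAT from Step #1. As an illustration only (not the code of this function), here is a hedged sketch of posting a status to a pull request with curl; the organization, project, repository ID, and pull request ID are placeholders, and the endpoint and api-version should be confirmed against the current Azure DevOps REST API documentation:

# Placeholders: myorg, myproject, repository ID, and pull request ID.
# Basic auth uses an empty user name and the PAT from Step #1 as the password.
AZDO_PAT="<your-personal-access-token>"

curl -sS -u ":${AZDO_PAT}" \
  -H "Content-Type: application/json" \
  -X POST \
  "https://dev.azure.com/myorg/myproject/_apis/git/repositories/<repository-id>/pullRequests/<pull-request-id>/statuses?api-version=7.1-preview.1" \
  -d '{
        "state": "succeeded",
        "description": "Branch validation passed",
        "context": { "name": "branch-validation", "genre": "teracloud" }
      }'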
