Blog | Teracloud


  • How to apply for Amazon's Service Delivery Program (SDP)

    Amazon's Service Delivery Program (SDP) presents an exciting opportunity for service providers looking to work with one of the world's most influential tech giants. By joining the SDP, companies can establish strong relationships with Amazon Web Services (AWS) and access a global audience. However, the competition is fierce, and preparation is key to standing out in the application process. In this guide, we will explore essential tips and considerations for successfully applying to Amazon's SDP. 1. Understand the Program Requirements Before you embark on your journey to apply for Amazon's Service Delivery Program (SDP), it's crucial to have a comprehensive understanding of the program's requirements. These requirements serve as the foundation for your application, ensuring that you align with Amazon's expectations and can provide the level of service they seek. Here's an expanded breakdown of what this entails: Technical Expertise: Amazon's SDP is geared towards service providers who possess a deep understanding of Amazon Web Services (AWS). This means you should have a proven track record of working with AWS technologies, deploying solutions, and managing AWS resources effectively. Your technical expertise should extend to various AWS services and use cases. Certifications: AWS certifications are a testament to your knowledge and proficiency in AWS. Depending on the specific services you plan to deliver as part of the SDP, having relevant certifications can significantly bolster your application. Certifications demonstrate your commitment to continuous learning and your ability to stay updated with the latest AWS developments. Referenceable Clients: References from satisfied clients can be a powerful asset in your application. These references should be able to vouch for your capabilities, professionalism, and the positive impact your services have had on their AWS environments. Having a diverse range of referenceable clients from various industries can demonstrate your versatility and ability to adapt to different contexts. Business Practices: Amazon values partners who uphold high standards of business ethics and professionalism. Your company's business practices, including responsiveness, communication, and customer-centric approaches, should align with Amazon's values. A strong reputation in the industry for integrity and reliability can enhance your application's credibility. AWS Partnership Tier: Depending on the tier of partnership you aim to achieve within the SDP, there might be specific requirements to fulfill. Higher partnership tiers often require a deeper level of engagement with AWS, which could include meeting revenue targets, demonstrating a significant number of successful customer engagements, and showing a commitment to driving AWS adoption. 2. Demonstrate AWS Expertise As you navigate the application process for Amazon's Service Delivery Program (SDP), highlighting your expertise in Amazon Web Services (AWS) is a fundamental aspect that can set your application apart. Demonstrating your in-depth understanding of AWS technologies and your ability to leverage them effectively is key. Here's a comprehensive exploration of how to effectively showcase your AWS expertise: Project Portfolio: Provide a detailed portfolio of projects that showcases your hands-on experience with AWS. Highlight a variety of projects that demonstrate your proficiency across different AWS services, such as computing, storage, networking, security, and databases. 
Include project descriptions, the challenges you addressed, the solutions you implemented, and the outcomes achieved. Architectural Excellence: Describe how you've designed AWS architectures to meet specific business needs. Explain the decision-making process behind architecture choices, scalability considerations, fault tolerance measures, and security implementations. Highlight instances where your architectural decisions led to optimized performance and cost savings. Use Cases: Illustrate your familiarity with a range of AWS use cases. Detail scenarios where you've successfully deployed AWS solutions for tasks like application hosting, data analytics, machine learning, Internet of Things (IoT), and serverless computing. Showcase your ability to tailor AWS services to diverse client requirements. Problem Solving: Provide examples of how you've troubleshooted and resolved complex issues within AWS environments. Discuss instances where you identified bottlenecks, optimized performance, or resolved security vulnerabilities. This demonstrates your ability to handle real-world challenges that can arise during service delivery. AWS Best Practices: Emphasize your adherence to AWS best practices in terms of security, compliance, performance optimization, and cost management. Discuss how you've implemented well-architected frameworks and followed AWS guidelines to ensure the reliability and scalability of your solutions. 3. Focus on Innovation and Quality Amazon seeks partners who not only meet standards but also bring innovation and quality to the table. In your application, showcase how your company adds unique value through innovative approaches and how you ensure quality in every service you offer. Continuous Improvement: Highlight your commitment to continuous improvement in your services. Describe how you actively seek feedback from clients and incorporate their input to refine and enhance your solutions. Emphasize your agility in adapting to changing client needs and industry trends. Metrics of Success: Quantify the success of your innovative solutions with relevant metrics. If your solution improved performance, reduced costs, or increased revenue for your clients, provide specific numbers and percentages to highlight the tangible impact of your work. Quality Assurance: Describe your quality assurance processes and methodologies. Explain how you ensure that your solutions meet the highest standards in terms of functionality, security, and performance. Highlight any certifications, industry standards, or best practices you adhere to. Collaboration with Clients: Showcase instances where you collaborated closely with clients to co-create innovative solutions. Discuss how you facilitated workshops, brainstorming sessions, and prototyping activities to bring their ideas to life while adding your expertise. 4. Prepare Strong References Solid references from past clients are a vital component of your application. Select references that can vouch for your capabilities and achievements in delivering AWS services. Make sure you have authentic testimonials that highlight your professionalism and skills. 5. Articulate Your Value Proposition Clearly explain why your company is the right choice for the SDP. What makes your approach unique? How will your collaboration benefit Amazon and AWS customers? Articulate your value proposition concisely and convincingly. 6. 
Preparation and Detailed Review Thorough preparation and meticulous review are crucial steps in the application process for Amazon's Service Delivery Program (SDP). Any grammatical errors or inaccuracies in your application could impact the impression you make on Amazon's evaluators. Here's a detailed exploration of how to approach these aspects: Organized Structure: Organize your application coherently and clearly. Divide your content into distinct sections such as experience, value proposition, project examples, and references. Use headers and bullet points to enhance readability and highlight key points. Relevant Content: Ensure that each section of your application is relevant to the requirements of the SDP. Avoid including redundant information or content that does not directly contribute to demonstrating your experience and capability to deliver quality services on AWS. Accurate Information: Verify that all provided information is accurate and up-to-date. Including incorrect or outdated information can affect the credibility of your application. Exemplary Stories: In the past experience section, choose project stories that exemplify your achievements and capabilities. Provide specific details about challenges you faced, how you overcame them, and the tangible results you achieved. Professional Language: Maintain a professional and clear tone throughout your application. Avoid unnecessary jargon or overly technical language that might hinder understanding for evaluators who may not be experts in all technical areas. Reflection and Context: Don't just list achievements, but also provide context and reflection on your experience. Explain why certain projects were challenging or why you chose specific approaches. This demonstrates your ability to think critically and learn from experiences. Grammatical Review: Carefully review your application for grammatical and spelling errors. A professionally written and well-edited application showcases your attention to detail and seriousness. Consistent Formatting: Maintain consistent formatting throughout the application. Use the same font, font size, and formatting style throughout the document to create a coherent and professional presentation. External Feedback: Consider asking colleagues or mentors to review your application. Often, an extra set of eyes can identify areas for improvement that you might have overlooked. Deadlines and Submission: Ensure you meet the deadlines set by Amazon and submit your application according to the provided instructions. Applying for Amazon's SDP is an exciting opportunity but requires careful planning and preparation. By following these tips and considerations, your application will be well on its way to standing out among competitors and establishing a strong partnership with Amazon Web Services. Remember that authenticity, AWS expertise, and a clear value proposition are key elements to impressing in the selection process. Best of luck in your application to Amazon's SDP! For more info: https://aws.amazon.com/partners/programs/service-delivery/?nc1=h_ls Julian Catellani Cloud Engineer Teracloud If you are interested in learning more about our TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • Secure Your Data with SOC 2 Compliant Solutions

    In today's digital landscape, where data breaches and cyber threats have become increasingly sophisticated, protecting sensitive information is of paramount importance. One effective approach that organizations are adopting to ensure the security of their data is by implementing SOC 2-compliant solutions. In this article, we'll delve into what SOC 2 compliance entails, its significance for safeguarding data, and how businesses can benefit from adopting such solutions. Table of Contents Understanding SOC 2 Compliance Key Components of SOC 2 Compliance Who Needs SOC 2 Compliance? In an era where data breaches can lead to devastating financial and reputational losses, companies must adopt robust strategies to safeguard their sensitive information. SOC 2 compliance offers a comprehensive framework that helps organizations fortify their data security measures. By adhering to the SOC 2 standards, companies can not only protect themselves from potential cyber threats but also gain a competitive edge in the market. Understanding SOC 2 Compliance What is SOC 2? SOC 2, or Service Organization Control 2, is a set of stringent compliance standards developed by the American Institute of CPAs (AICPA). It focuses on the controls and processes that service providers implement to ensure the security, availability, processing integrity, confidentiality, and privacy of customer data. Unlike SOC 1, which assesses financial controls, SOC 2 is geared towards evaluating the effectiveness of a company's non-financial operational controls. Why is SOC 2 Compliance Important? SOC 2 compliance is crucial because it reassures customers, partners, and stakeholders that a company has established rigorous security measures to protect sensitive data. As data breaches continue to make headlines, consumers are becoming more cautious about sharing their information with businesses. SOC 2 compliance demonstrates a commitment to data protection, enhancing trust and credibility. Key Components of SOC 2 Compliance Security Security is a foundational component of SOC 2 compliance. It involves implementing safeguards to protect against unauthorized access, data breaches, and other security threats. This includes measures such as multi-factor authentication, encryption, and intrusion detection systems. Availability Businesses must ensure that their services are available and operational when needed. SOC 2 compliance assesses the measures in place to prevent and mitigate service interruptions, ranging from robust infrastructure to disaster recovery plans. Processing Integrity Processing integrity focuses on the accuracy and completeness of data processing. Companies must have controls in place to ensure that data is processed correctly, preventing errors and unauthorized modifications. Confidentiality Confidentiality revolves around protecting sensitive information from unauthorized disclosure. This includes customer data, intellectual property, and other confidential information. Privacy Privacy controls are vital for businesses that handle personally identifiable information (PII). SOC 2 compliance evaluates whether a company's practices align with relevant data privacy regulations. Who Needs SOC 2 Compliance? SaaS Companies Software-as-a-Service (SaaS) companies often handle a vast amount of customer data. Achieving SOC 2 compliance is essential for building trust and attracting clients concerned about the security of their data. Cloud Service Providers Cloud service providers store and process data for various clients. 
SOC 2 compliance demonstrates their commitment to ensuring the security, availability, and privacy of customer data. Data-Centric Businesses Companies that rely heavily on data, such as e-commerce platforms or healthcare providers, need SOC 2 compliance to protect customer information and maintain legal requirements. Stay tuned for the rest of the article, where we will delve deeper into achieving SOC 2 compliance, its benefits, and its challenges, as well as a comparison with other compliance frameworks. Paulo Srulevitch Content Creator Teracloud If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to integrate Prometheus in an EKS Cluster as a Data Source in AWS Managed Grafana

Whether you're an experienced DevOps engineer or just starting your cloud journey, this article will equip you with the knowledge and tools needed to effectively monitor and optimize your EKS environment.

Objective
Configure and use Prometheus to collect metrics on an Amazon EKS cluster and view those metrics in AWS Managed Grafana (AMG). Provide usage instructions and an estimate of the costs of connecting Prometheus metrics as an AMG data source. Let's assume that Fluent Bit is already configured on the EKS cluster.

Step #1: Prometheus Configuration
Ensure Prometheus is installed and running in your Amazon EKS cluster. You can install it via Terraform using the Helm chart. Verify that Prometheus is successfully collecting metrics from your cluster nodes and applications.

Step #2: Configure the data source in Grafana
Now you'll need to configure the data source in Grafana. (The LoadBalancer created for the Prometheus service will serve as the reference here.) Open the AWS Route 53 console and make sure a private Hosted Zone named "monitoring.domainname" is created. Inside this Hosted Zone, create an Alias record pointing to the LoadBalancer mentioned above. This record will be used to configure the Prometheus service as the data source in AMG.

AWS Managed Grafana Configuration
Provision an instance of AWS Managed Grafana. Access the AWS Managed Grafana console and obtain the URL for the Grafana instance. Ensure you have the necessary permissions to manage data sources in AWS Managed Grafana.

Configure Prometheus as a Data Source in AWS Managed Grafana: Access AWS Managed Grafana using the URL obtained in the previous step. Navigate to the "Configuration" section and select "Data sources". Click on "Add data source" and choose "Prometheus" as the data source type. Complete the required fields, including the Prometheus endpoint URL and authentication credentials if applicable, or a Workspace IAM Role. Save the data source configuration.

Visualizing Metrics in Grafana: Identify the KPIs you need to visualize in the dashboard. Create dashboards in Grafana to visualize the metrics collected by Prometheus. Use Grafana's query and visualization options to create customized visualizations of your metrics. Explore different panel types such as graphs, tables, and text panels to present the information in a clear and understandable manner.

Step #3: Estimate costs
To estimate the costs associated with integrating Prometheus as a data source in AWS Managed Grafana, consider the following:

AWS Managed Grafana Costs: Refer to the AWS documentation for the details and pricing associated with AWS Managed Grafana. According to that documentation, the price is per license, either editor or user: an editor can create and edit both the workspace and the metric dashboards, while a user can only view the panels and metrics previously configured by an editor (https://aws.amazon.com/es/grafana/pricing/). Today, the editor license costs $9 per month and the user license costs $5 per month.

Storage Costs: If AWS Managed Grafana uses additional storage to store the metrics collected by Prometheus, refer to the AWS documentation for information on pricing and available storage options.

Remember that costs may vary depending on your specific configuration and the AWS region where your AWS Managed Grafana instance is located. Consult the documentation and updated pricing details for an accurate cost estimation.
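To make Step #1 concrete, here is a minimal sketch of installing Prometheus with Helm and checking that it is up before pointing AMG at it. It assumes the community prometheus-community/prometheus chart, a namespace called prometheus, and a kubeconfig already pointing at your EKS cluster; adjust names and values if you drive the same chart through Terraform's helm_release instead.

# Add the community Helm repository and install Prometheus into its own namespace
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus --namespace prometheus --create-namespace

# Confirm the server pod is running and see which service exposes it
kubectl get pods,svc -n prometheus

# Quick sanity check: port-forward the Prometheus server and list a few scraped metric names
kubectl port-forward -n prometheus svc/prometheus-server 9090:80 &
sleep 3
curl -s "http://localhost:9090/api/v1/label/__name__/values" | head -c 300

The service you end up exposing (for example, behind the LoadBalancer and Route 53 alias described in Step #2) is the endpoint URL you paste into the AMG data source form.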
Final thoughts
In conclusion, this is a very interesting and easy-to-implement alternative to replace existing monitoring solutions in clusters that have a large number of running pods. That scenario generates an even larger number of metrics, and that's where this license-based solution becomes much more cost-effective than a metric-based pricing model. Martín Carletti Cloud Engineer Teracloud If you want to know more about EKS, we suggest checking Cross account access to S3 using IRSA in EKS with Terraform as IaaC If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to use a secondary CIDR in EKS

In this Teratip, we'll take a deep dive into the realm of secondary CIDR blocks in AWS EKS and explore how they can empower you to enhance your cluster's flexibility and scalability. We'll uncover the benefits of leveraging secondary CIDR blocks and walk you through the process of configuring them.

Introduction
As you expand your applications and services, you may face scenarios where the primary CIDR block assigned to your EKS cluster becomes insufficient. Perhaps you're introducing additional microservices, deploying multiple VPC peering connections, or integrating with legacy systems that have their own IP address requirements. These situations call for a solution that allows you to allocate more IP addresses to your cluster without sacrificing stability or network performance. Secondary CIDR blocks provide an elegant solution by enabling you to attach additional IP address ranges to your existing VPC, thereby expanding the available address space for your EKS cluster. Throughout this post, we'll go over the step-by-step process of adding secondary CIDR blocks to your AWS EKS cluster.

Create the EKS cluster
For this demonstration, I created a simple EKS cluster with only one node and deployed the famous game 2048, which can be accessed through an Internet-facing Application Load Balancer. So, the EKS cluster and workload look like this: And this is the VPC where the cluster is located: As you can see, this VPC has the 10.0.0.0/16 IPv4 CIDR block assigned. With this in mind, all pods in the cluster will get an IP address within this range: Next, we will configure this same cluster to use a secondary CIDR block in the same VPC. This way, almost all pods will get IP addresses from the new CIDR.

Step by step process

Step #1: Create the secondary CIDR within our VPC
RESTRICTION: EKS supports additional IPv4 CIDR blocks in the 100.64.0.0/16 range. It's possible to add a second CIDR to the current VPC and, of course, create subnets within this VPC using the new range. I did it through Terraform, but this can be done using the AWS Console as well. The code that I used to create the VPC is the following:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.19.0"

  name = "teratip-eks-2cidr-vpc"
  cidr = "10.0.0.0/16"

  secondary_cidr_blocks = ["100.64.0.0/16"]

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}

Take a look at the line secondary_cidr_blocks = ["100.64.0.0/16"] and at the private subnets (the last two) created in this CIDR: private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]. The resulting VPC looks like this:

Step #2: Configure the CNI
DEFINITION: CNI (Container Network Interface) concerns itself with the network connectivity of containers and with removing allocated resources when a container is deleted. In order to use a secondary CIDR in the cluster, you need to configure some environment variables in the CNI DaemonSet configuration by running the following commands:
1. To turn on custom network configuration for the CNI plugin, run the following command:

kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true

2. To add the ENIConfig label for identifying your worker nodes, run the following command:

kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone

3. Enable the parameter to assign prefixes to network interfaces for the Amazon VPC CNI DaemonSet:

kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

Terminate the worker nodes so that Autoscaling launches newer nodes that come bootstrapped with the custom network config.

Step #3: Create the ENIConfig resources for the new subnets
As the next step, we will add custom resources to the ENIConfig custom resource definition (CRD). In this case, we will store VPC Subnet and SecurityGroup configuration information in these CRDs so that worker nodes can access them to configure the VPC CNI plugin. Create a custom resource for each subnet by replacing the Subnet and SecurityGroup IDs. Since we created two secondary subnets, we need to create two custom resources.

---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2a
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0fabe93c6f43f492b
---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2b
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0484194486fad2ce3

Note: The ENIConfig name must match the Availability Zone of your subnets. You can get the cluster security group included in the ENIConfigs in the EKS Console or by running the following command:

aws eks describe-cluster --name $cluster_name --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text

Once the ENIConfig YAML is created, apply it: kubectl apply -f

Check the new network configuration
Finally, all the pods running on your worker nodes should have IP addresses within the secondary CIDR. As you can see in the screenshot below, the service is still working as it should be: Also, notice that the EC2 worker node now has an ENI (Elastic Network Interface) with an IP address within the secondary CIDR:

Configure max-pods per node
Enabling a custom network removes an available network interface from each node that uses it, and the node's primary network interface is not used for pod placement when a custom network is enabled. In this case, you must update max-pods. For this step I used Terraform to update the node group, increasing the max-pods parameter. The following is the IaC for the EKS cluster:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"

  cluster_name    = local.cluster_name
  cluster_version = "1.24"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]

  cluster_endpoint_public_access = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name           = "node-group-1"
      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 1
      desired_size = 1

      bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=110'"

      pre_bootstrap_user_data = <<-EOT
        export CONTAINER_RUNTIME="containerd"
        export USE_MAX_PODS=false
      EOT
    }
  }
}

In the example I set max-pods = 110, which is more than this EC2 instance type can actually run, but it only acts as an upper limit: the node will allocate as many pods as its resources allow.
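If you want to check the result from the command line rather than the console, here is a small verification sketch. It only assumes the secondary CIDR used above (100.64.0.0/16) and a kubeconfig pointing at the cluster.

# Pod IPs: after the nodes are replaced, most pods should land in the 100.64.0.0/16 range
kubectl get pods -A -o wide | grep "100\.64\."

# Confirm the aws-node DaemonSet picked up the custom networking variables
kubectl describe daemonset aws-node -n kube-system | grep -E "CUSTOM_NETWORK_CFG|ENI_CONFIG_LABEL_DEF|ENABLE_PREFIX_DELEGATION"

# Make sure one ENIConfig exists per Availability Zone
kubectl get eniconfigs.crd.k8s.amazonaws.com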
If you're interested in learning more about how to increase the number of available IP addresses for your nodes, here is an interesting guide from AWS that you can read.

Just in case…
If, for some reason, something goes wrong and your pods aren't getting IP addresses, you can easily roll back your changes by running the following command and launching new nodes:

kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false

After running the command, the whole cluster will go back to using the primary CIDR again.

Final Thoughts
By diligently following the step-by-step instructions provided in this guide, you have acquired the knowledge to effectively set up and supervise secondary CIDR blocks, tailoring your network to match the needs of your applications. This newfound adaptability not only simplifies the allocation of resources but also lets you seamlessly adjust your infrastructure in response to evolving requirements. Ignacio Rubio Cloud Engineer Teracloud If you want to know more about Kubernetes, we suggest checking Conftest: The path to more efficient and effective Kubernetes automated testing If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to add 2FA to our SSH connections with Google Authenticator

In this Teratip, we will learn how to configure the Google Authenticator PAM module for our SSH (Secure Shell) server connections to add an extra layer of security and protect our systems and data from unauthorized access. The Google Authenticator PAM module is a software component that integrates with the Linux PAM framework to provide two-factor authentication using the Google Authenticator app. It enables users to generate time-based one-time passwords (TOTPs) on their phones, which serve as the second factor for authentication. Two-factor authentication (2FA) is a security measure that requires users to provide two different factors to verify their identity. These factors can include something they know (like a password), something they have (like a mobile device), or something they are (like a fingerprint). Combining two factors adds an extra layer of security, making it significantly harder for attackers to gain unauthorized access. Even if an attacker manages to obtain or guess the password, they would still need the second factor to authenticate successfully. This helps protect against various types of attacks, such as password cracking or phishing, and enhances overall security by requiring dual verification. This is how it's done:

Step #1: Install the Google Authenticator app
The first step is installing the Google Authenticator app on our smartphone. It's available for Android and iOS.

Step #2: Install the Google Authenticator PAM module
Secondly, we'll install the Google Authenticator PAM module. On Debian/Ubuntu we can find this module in the repositories:

sudo apt install libpam-google-authenticator

Step #3: SSH server configuration
After installing the requirements, we need to configure the SSH server to use the new PAM module. We can do it easily by editing a couple of configuration files. In /etc/pam.d/sshd add the following line:

auth required pam_google_authenticator.so

And in /etc/ssh/sshd_config change the following option from 'no' to 'yes':

ChallengeResponseAuthentication yes

We need to restart the service for the changes to be applied to our SSH server:

sudo systemctl restart sshd.service
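If you want to double-check that the server picked up these changes before moving on, here is a small, optional verification sketch (it assumes a Debian/Ubuntu-style OpenSSH installation like the one above):

# The PAM line we just added should be present
grep pam_google_authenticator /etc/pam.d/sshd

# Syntax-check the sshd configuration and confirm the effective setting
sudo sshd -t
sudo sshd -T | grep -iE "challengeresponseauthentication|kbdinteractiveauthentication"

Depending on your OpenSSH version, the effective option is reported under one of those two names; a value of yes means challenge-response prompts (and therefore the verification code) are enabled.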
Finally, we just need to run the google-authenticator command to start configuring 2FA, simply by executing:

google-authenticator

This will trigger a few configuration questions, and the first one will generate a QR code, a secret key, and recovery codes. You will need to scan the QR code with the Google Authenticator app previously installed on your phone: Do you want authentication tokens to be time-based (y/n) y After scanning the QR code, the TOTP codes will start appearing in the app like this:

Step #4: Answer context-specific questions
After this, you have to answer the other questions based on your particular scenario: Do you want me to update your "/home/facu/.google_authenticator" file? (y/n) y Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y By default, a new token is generated every 30 seconds by the mobile app. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. This allows for a time skew of up to 30 seconds between authentication server and client. If you experience problems with poor time synchronization, you can increase the window from its default size of 3 permitted codes (one previous code, the current code, the next code) to 17 permitted codes (the 8 previous codes, the current code, and the 8 next codes). This will permit for a time skew of up to 4 minutes between client and server. Do you want to do so? (y/n) n If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s. Do you want to enable rate-limiting? (y/n) y And that's it: with these simple steps we have 2FA configured on the SSH server, and the TOTP will be required in addition to your password the next time you try to connect:

ssh facu@192.168.35.72
Password:
Verification code:

With this Teratip we have shown how easy it is to implement 2FA and increase the security of our SSH servers without much effort.

Final thoughts
Incorporating an extra layer of security through 2FA with Google Authenticator into your SSH access is a pivotal step towards fortifying your cloud infrastructure. By following the systematic guide outlined in this blog post, you've empowered yourself to safeguard sensitive data and resources from unauthorized access. With enhanced authentication in place, you're well-equipped to confidently navigate the digital landscape, knowing that your cloud resources remain shielded from potential threats. Facundo Montero Cloud Engineer Teracloud If you want to know more about Security, we suggest checking Prevent (and save money in the process) Security Hub findings related to old ECR images scanned If you are interested in learning more about our TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • What is Istio Service Mesh? Gain Observability over your infrastructure

In this TeraTip we'll go over a brief introduction to Istio Service Mesh by installing it on our cluster and gaining basic visibility of traffic flow. Learn all about Istio Service Mesh for modern microservices applications with the practical examples listed below. If you're looking to provide powerful features to your Kubernetes cluster, in this post you'll learn how to get:

Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication, and authorization
Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection
A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas
Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress

Before you continue reading, make sure you're familiar with the following terms.

Glossary
Service Mesh: A dedicated and configurable infrastructure layer that handles the communication between services without having to change the code in a microservice architecture. Some of the Service Mesh responsibilities include traffic management, security, observability, health checks, load balancing, etc.
Sidecar (imagine a motorcycle sidecar): This is the terminology used to describe the container which is going to run side-by-side with the main container. This sidecar container can perform some tasks to reduce the pressure on the main one. For example, it can perform log shipping, monitoring, file loading, etc. The general use is as a proxy server (TLS, auth, retries).
Control Plane: We understand the control plane as the "manager" of the Data Plane, and the Data Plane as the one that centralizes the proxy sidecars through the Istio agent.

Just as a heads up, since we're focusing on Istio, we're going to skip the minikube setup. From this point on, we'll assume you already have your testing cluster to play around with Istio, as well as basic tools such as istioctl. Ok, now that we've got those covered, let's get our hands dirty!

What is Istio?
Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio's powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring, with few or no service code changes.

Integrate Istio into a cluster
Alrighty, first things first. Let's get Istio on our cluster. There are three options for us to integrate Istio: install it via istioctl (istioctl install --set profile=demo -y), install it via the Istio Operator, or install it via Helm. The previous step will install the core components (istio ingress gateway, istiod, istio egress gateway). Run istioctl verify-install if you are not sure of what you just installed into your cluster. You should see something like this: Now, to follow up with this demo we recommend you make use of the Istio samples directory, where you will find demo apps to play around with.

Label your namespace to inject sidecar pods
Time to get our namespace labeled; that's how Istio knows where to inject the sidecar pods. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection. Now run istioctl analyze And this is the expected output: Time to deploy some resources. Execute kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml The previous command will create the following resources (see the screenshot below). Make sure everything is up and running before continuing; execute kubectl get pods -A to verify. And… voila! There we have two containers per pod.
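Before moving on to the gateway, it's worth confirming that the injection actually happened. Here is a small, optional verification sketch using stock kubectl and istioctl commands; it assumes the bookinfo sample was deployed to the default namespace as above.

# The label that drives injection should read "enabled"
kubectl get namespace default -L istio-injection

# Every bookinfo pod should show READY 2/2 (the app container plus the istio-proxy sidecar)
kubectl get pods -n default

# Ask Istio itself whether the sidecars are connected and in sync with istiod
istioctl proxy-status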
Note that the Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API:

kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.6.1" | kubectl apply -f -; }

If using Minikube, remember to open a tunnel! minikube tunnel

It's gateway time: kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

Visualize your service mesh with Kiali
Okey-dokey, now it's time for some service mesh visualization; we are going to use Kiali. Execute the following: kubectl apply -f samples/addons The previous command will create some cool stuff (listed below). Wait for Kiali to be ready with kubectl rollout status deployment/kiali -n istio-system and check it out with kubectl -n istio-system get svc kiali Everything looks good? Cool. Now it's time to navigate through the dashboard: execute istioctl dashboard kiali, and go to your browser. If you're testing this on a non-productive (meaning, without traffic) site, then it's going to look empty and boring since we don't have any traffic flowing. Check your IP with minikube ip and execute the following exports:

export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')

Awesome, now we can curl our app and see what happens: curl "http://$INGRESS_HOST:$INGRESS_PORT/productpage" Fair enough, but let's get some more traffic with a while loop as follows:

while sleep 0.01; do curl -sS "http://$INGRESS_HOST:$INGRESS_PORT/productpage" &> /dev/null; done

Alright, now look at the screenshot below: Kiali provides us with a useful set of visual tools to better understand our workload traffic. In the second screenshot we can see the power of Kiali; the white dots on top of the green lines represent the traffic (even though it's a static image, picture those dots moving in different directions and speeds!). In summary, Istio provides us with a powerful set of tools. In this TeraTip we saw a brief introduction to Istio Service Mesh. We focused our attention on installing it on our cluster and on gaining visualization of some basic traffic flows. Stay tuned for more! References https://istio.io/latest/docs/ https://istio.io/latest/docs/examples/bookinfo/ Tomás Torales Cloud Engineer Teracloud If you want to know more about Kubernetes, we suggest checking Enhance your Kubernetes security by leveraging KubeSec If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • EKS Pricing: How to use Kubecost in an EKS Cluster

Kubecost is an efficient and powerful tool that allows you to manage costs and resource allocation in your Kubernetes cluster. It provides a detailed view of the resources used by your applications and helps optimize resource usage, which can ultimately reduce cloud costs. In this document, we'll guide you through the necessary steps to use Kubecost in your Kubernetes cluster. Let's dive in.

Deploy Kubecost in Amazon EKS
Step #1: Install Kubecost on your Amazon EKS cluster
Step #2: Generate the Kubecost dashboard endpoint
Step #3: Access the cost monitoring dashboard
Overview of available metrics
Final thoughts

Deploy Kubecost in Amazon EKS
To get started, follow these steps to deploy Kubecost into your Amazon EKS cluster in just a few minutes using Helm. Install the following tools: Helm 3.9+, kubectl, and optionally eksctl and awscli. Make sure you have access to an Amazon EKS cluster. To deploy one, see Getting started with Amazon EKS. If your cluster is running Kubernetes version 1.23 or later, you must have the Amazon EBS CSI driver installed on your cluster.

Step #1: Install Kubecost on your Amazon EKS cluster.
In your environment, run the following command from your terminal to install Kubecost on your existing Amazon EKS cluster:

helm upgrade -i kubecost \
  oci://public.ecr.aws/kubecost/cost-analyzer --version 1.99.0 \
  --namespace kubecost --create-namespace \
  -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/cost-analyzer/values-eks-cost-monitoring.yaml

Note: You can find all available versions of the EKS-optimized Kubecost bundle here. We recommend finding and installing the latest available Kubecost cost analyzer chart version. By default, the installation includes certain prerequisite software, including Prometheus and kube-state-metrics. To customize your deployment (e.g., skipping these prerequisites if you already have them running in your cluster), you can find a list of available configuration options in the Helm configuration file.

Step #2: Generate the Kubecost dashboard endpoint.
After you install Kubecost using the Helm command in the previous step, it should take under two minutes to complete. You can run the following command to enable port-forwarding to expose the Kubecost dashboard:

kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090

Step #3: Access the cost monitoring dashboard.
In your web browser, navigate to http://localhost:9090 to access the dashboard. You can now start tracking your Amazon EKS cluster cost and efficiency. Depending on your organization's requirements and setup, there are several options to expose Kubecost for ongoing internal access. Here are a few examples you can use as references: Check out the Kubecost documentation for Ingress Examples as a reference for using the Nginx ingress controller with basic auth. Consider using the AWS Load Balancer Controller to expose Kubecost and use Amazon Cognito for authentication, authorization, and user management. You can learn more in How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps.

-Overview of available metrics
The following are examples of the metrics available within the Kubecost dashboard. Use Kubecost to quickly see an overview of Amazon EKS spend, including cumulative cluster costs, associated Kubernetes asset costs, and monthly aggregated spend.

-Cost allocation by namespace
View monthly Amazon EKS costs as well as cumulative costs per namespace and other dimensions up to the last 15 days. This enables you to better understand which parts of your application are contributing to Amazon EKS spend.
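If you prefer the command line to the dashboard, you can pull a similar namespace breakdown from Kubecost's Allocation API while the port-forward from Step #2 is running. This is only a sketch: the /model/allocation path and its window/aggregate parameters come from Kubecost's Allocation API and may differ between chart versions, so check the API documentation for your release.

# With the port-forward to 9090 still active, ask for the last 7 days of cost aggregated by namespace
curl -sG "http://localhost:9090/model/allocation" \
  --data-urlencode "window=7d" \
  --data-urlencode "aggregate=namespace"

The response is JSON, so piping it through a tool like jq makes the per-namespace totals easier to read.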
-Spend and usage for other AWS Services associated with Amazon EKS clusters
View the costs of AWS infrastructure assets that are associated with your EKS resources.

-Export Cost Metrics
At a high level, Amazon EKS cost monitoring is deployed with Kubecost, which includes Prometheus, an open-source monitoring system and time series database. Kubecost reads metrics from Prometheus, then performs cost allocation calculations and writes the metrics back to Prometheus. Finally, the Kubecost front end reads metrics from Prometheus and shows them on the Kubecost user interface (UI). The architecture is illustrated by the following diagram:

-Kubecost reading metrics
With this pre-installed Prometheus, you can also write queries to ingest Kubecost data into your current business intelligence system for further analysis. You can also use it as a data source for your current Grafana dashboard to display Amazon EKS cluster costs that your internal teams are familiar with. To learn more about how to write Prometheus queries, review Kubecost's documentation or use the example Grafana JSON models in the Kubecost GitHub repository as references.

-AWS Cost and Usage Report (AWS CUR) integration
To perform cost allocation calculations for your Amazon EKS cluster, Kubecost retrieves the public pricing information of AWS services and resources from the AWS Price List API. You can also integrate Kubecost with the AWS CUR to enhance the accuracy of the pricing information specific to your AWS account (e.g., Enterprise Discount Programs, Reserved Instance usage, Savings Plans, and Spot usage). You can learn more about how the AWS CUR integration works at AWS Cloud Integration.

-Cleanup
You can uninstall Kubecost from your cluster with the following command:

helm uninstall kubecost --namespace kubecost

Final thoughts
Implementing Kubecost in your Amazon EKS cluster can significantly enhance your cost management and resource optimization efforts. By providing a comprehensive view of resource usage and associated costs, Kubecost empowers you to make informed decisions on optimizing resource allocation, which can lead to reduced cloud costs. Its easy deployment process using Helm makes it accessible to users with various levels of expertise. Additionally, Kubecost's integration with Prometheus enables you to leverage your existing business intelligence systems and Grafana dashboards for further analysis and visualization. Overall, Kubecost proves to be an invaluable tool for cost-conscious organizations seeking to maximize their Amazon EKS cluster's efficiency while keeping cloud expenditures in check. Give Kubecost a try today and take control of your Kubernetes cost management with ease. Martín Carletti Cloud Engineer Teracloud If you want to know more about Kubernetes, we suggest checking Conftest: The path to more efficient and effective Kubernetes automated testing If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to gain control over your Pull Request in Azure DevOps in 5 steps

    By leveraging Azure Functions, webhooks, and pull request configurations, you can efficiently validate branches across multiple repositories without the need for separate pipelines. Let’s learn how. Azure DevOps is a powerful cloud-based platform that offers a wide range of development tools and services. Nevertheless, when it comes to running pipelines across multiple repositories, Azure DevOps has certain limitations that make it cumbersome to perform build validations on specific branches existing in multiple repositories. But fear not! We have a solution that will save you time and effort. In this guide, we'll take you through the steps to set up this solution. You'll learn: How to obtain an authentication token Prepare the Azure Function code Configure webhooks. Set up the pull request protection policy. Following these steps will streamline your build validation process and make your Azure DevOps workflows a breeze. What you’ll need Here you have the magic ingredients: 1 Azure DevOps Account. 1 Webhook. 1 Azure Function. 1 Token. To achieve our goal successfully in this lab, we’ll follow a series of steps. Step # 1: Create a token First of all, we must create a Token to be used in an Azure Function. This function will be triggered whenever a Pull Request is created, thanks to the webhook that connects Azure Function and Azure DevOps. (Don't worry, it's much simpler than it sounds) Let's get started by following these instructions: Log in to your Azure DevOps account. Navigate to your profile settings by clicking on your profile picture or initials in the top-right corner of the screen. From the dropdown menu, select "Security". In the "Personal access tokens" section, click on "New Token". Provide a name for your token to identify its purpose. Choose the desired organization and set the expiration date for the token. Under "Scope", select the appropriate level of access needed for your token. For example, if you only need to perform actions related to build and release pipelines, choose the relevant options. Review and confirm the settings. Once the token is created, make sure to copy and securely store it. Note that you won't be able to view the token again after leaving the page. So be careful! You can now use this token in your Azure Function or other applications to authenticate and access Azure DevOps resources. Step # 2: Prepare the Azure Function Click on the "Create a resource" button (+) in the top-left corner of the portal. In the search bar, type "Function App" and select "Function App" from the results. Click on the "Create" button to start the creation process. In the "Basics" tab, provide the necessary details: Subscription: Select your desired subscription. Resource Group: Select a name for the Resource Group. Function App name: Enter a unique name for your function app. Runtime stack: Choose .NET Region: Select the region closest to your target audience. Click on the "Next" button to proceed to the "Hosting" tab. Configure the hosting settings: Operating System: Windows Plan type: Select the appropriate plan type (Consumption, Premium, or Dedicated). Storage account: Create a new storage account or select an existing one. Click on the "Review + Create" button to proceed. Review the summary of your configuration, and if everything looks good, click on the "Create" button to create the Azure Function. The deployment process may take a few minutes. Once it's completed, you'll see a notification indicating that the deployment was successful. 
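Before you drop in the function code, it can be worth sanity-checking the token from Step #1 against the Azure DevOps REST API. Here is a minimal sketch with curl (the organization name is a placeholder, and the PAT is sent as the password of a Basic auth pair with an empty username):

# Replace <org> with your Azure DevOps organization and paste your PAT
PAT="<paste-your-token-here>"
curl -s -u ":$PAT" "https://dev.azure.com/<org>/_apis/projects?api-version=7.0"

A valid token with sufficient scope returns a JSON list of your projects; anything else means the token, its expiration, or its scopes need another look.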
Navigate to the newly created Function App and replace its code with the following one (.NET code):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using Newtonsoft.Json;

// Add your PAT (Token)
private static string pat = "";

public static async Task Run(HttpRequestMessage req, TraceWriter log)
{
    try
    {
        log.Info("Service Hook Received.");

        // Get request body
        dynamic data = await req.Content.ReadAsAsync

  • What is Automation? The underlying value of process optimization

In today's digital landscape, automation has become a powerful tool for optimizing processes and driving operational efficiency. Its significance is particularly pronounced in cloud computing, where organizations strive to stay competitive, agile, and secure. Businesses across all sectors are harnessing the potential of automated systems to streamline workflows, reduce costs, and enhance productivity. In this blog post, we explore the various types of automation systems, delve into the benefits of cloud automation, and highlight how automation is transforming the cloud computing landscape. By embracing automation, organizations can unlock the true potential of cloud computing and propel themselves to new heights of success.

What is Industrial Automation
Types of Automation Systems
What is Process Automation
IT Automation
Benefits of Cloud automation

What is Industrial Automation
Industrial automation, or basic automation, is the use of machines or computers to help optimize, speed up, and deliver repetitive tasks with minimal human interference by way of controlling systems, specifically for manufactured goods. The need for greater adoption of automation systems is growing by the year, if not by the quarter. Sectors across the board are aware of the growing importance of adopting automated systems to stay competitive, reduce costs, and optimize workforce efficiency. But automation is a broad topic, and with the recent rise of artificial intelligence, much of the attention is skewing toward trending topics like machine learning and large language models. For the manufacturing industry in particular, automation systems can be of great help.

Types of Automation: Fixed, Programmable, Flexible

Fixed Automation
Fixed automation is when the sequence of a given manufacturing process automates fixed conditions that are difficult to adjust or configure once initiated. Conditions for this kind of automation are set before deployment, and it is known for delivering large outputs or products at the expense of costly equipment that does all the work. This kind of automation delivers great output but is expensive and inflexible.

Programmable Automation
Different from fixed automation, programmable automation is more flexible when it comes to the conditioning and configuration of a given process. With programmable automation, updates to a given process or sequence of events become customizable, with the burden of downtime rather than the burden of cost. The downtime of the machines or instruments on the receiving end of programmability slows down the production rate. That's why this kind of automation is most common among processes that carry out batch production, particularly of manufactured goods.

Flexible Automation
Flexible automation is, well… flexible; even more so than the automation types mentioned above. Among the types of manufacturing automation, it's the one with the most versatility when it comes to process and sequence customization. This kind of automation is also known as soft automation. Every machine component is fed programming code and is operated by a single human. This kind of automation makes reconfiguring less expensive and product changeovers faster.

What is Process Automation?
Process automation is still close to the manufacturing process. This kind of automation can adapt to all stages of the manufacturing process to optimize each one, be that inspection, testing, or assembly.
But unlike industrial automation, instead of automating a long series of elements or products, process automation is closer to smaller groups of deliverables, where the output consists of reduced batch operations.

IT Automation
IT automation is the use of IT systems where software and computer technology create repeatable instructions and processes otherwise handled by humans. Today's fast-paced markets not only require quick delivery for customer service but also swift teamwork and operational agility. IT automation encompasses everything that touches workforce coordination, which makes it a key component for digital transformation and business success. It's defined as the use of software and technology to optimize and accelerate typical business processes, whether that is the onboarding of a new hire, streamlining a task management pipeline, or coordinating product development teams more effectively. Of course, where process optimization is most needed is in meeting customer and market needs, which both serve as beacons for companies to become more agile and relentless business structures than ever.

IT Automation in the Cloud
Automation specific to the cloud refers to any repetitive task that deals with the different areas of the cloud. Those can be tasks at the network level, infrastructure, cloud provisioning, application deployment, or configuration management.

Benefits of Cloud automation
Cloud automation is a resourceful, optimal, and innovative way to modernize an enterprise from end to end. The underlying principle of automation is a question of holistic efficiency. Automation at an IT level can help your teams of cloud engineers with repetitive tasks and free them from manual processes, so their time can be spent on more urgent matters.

Streamlining Provisioning
Automation streamlines the provisioning process, eliminating manual and error-prone tasks. It enables businesses to quickly and consistently deploy resources, such as virtual machines, storage, and networks, with minimal human intervention. Automation also improves resource allocation by intelligently assigning resources based on predefined rules and policies. It ensures that resources are provisioned to the right departments or applications, preventing resource bottlenecks and optimizing performance. By using standardized templates and predefined configurations, businesses can ensure that every resource is provisioned consistently and reliably, minimizing configuration drift and potential vulnerabilities.

Efficient Configuration Management
Automation tools and scripts can rapidly deploy and configure cloud resources, saving time and effort compared to manual configuration. This not only speeds up the provisioning process but also enables faster responses to changing business needs. Automation improves system reliability by enabling proactive configuration monitoring and remediation. Automated tools can continuously monitor configurations, detect deviations from the desired state, and automatically initiate corrective actions. This helps organizations maintain the desired configuration and promptly address any anomalies, thereby enhancing system reliability and minimizing downtime.

Seamless Orchestration
One technical advantage is the ability to leverage Infrastructure as Code (IaC) practices. With automation tools like Terraform or AWS CloudFormation, organizations can define their infrastructure configurations as code. This allows for version control, reproducibility, and easy collaboration.
Infrastructure changes can be made through code, facilitating automation and reducing the risk of configuration drift. Automation also enables continuous integration and continuous deployment (CI/CD) pipelines in cloud orchestration. CI/CD pipelines automate the building, testing, and deployment of applications and infrastructure changes. This ensures faster time-to-market, reduces human error, and promotes DevOps practices by fostering collaboration between development and operations teams.

Smooth IT Migration
Automation enables comprehensive testing and validation of the migrated environment. Automated testing tools can simulate user interactions, verify application functionality, and perform performance and load testing. This ensures that the migrated applications perform optimally and meet the desired performance metrics. Additionally, automation supports continuous monitoring and optimization of the migrated infrastructure. Automated monitoring tools can collect and analyze performance data, detect anomalies, and trigger automated responses. This allows for proactive identification and resolution of issues, ensuring the stability, availability, and cost-efficiency of the migrated environment.

Agile Application Deployment
One of the primary advantages is the ability to automate the entire application deployment process, from building and packaging the application to deploying it in the cloud environment. Continuous integration and deployment (CI/CD) pipelines, powered by automation tools like Jenkins or GitLab, enable developers to automatically trigger builds, run tests, and deploy applications based on predefined workflows and triggers. This streamlines the deployment process, reduces manual errors, and accelerates time-to-market. Automation also allows for the seamless and consistent deployment of applications across multiple cloud instances. Tools like Kubernetes or AWS Elastic Beanstalk enable containerization and orchestration, automating the deployment, scaling, and management of applications. This ensures consistency in deployment configurations, simplifies application scaling, and enhances the reliability and availability of applications.

Robust Security and Compliance
Automation enables continuous security monitoring and threat detection. Tools like AWS GuardDuty or Azure Security Center automate the collection and analysis of security logs, network traffic, and application behavior to identify potential security threats or anomalies. Automated alerts and responses can be triggered to mitigate security risks promptly. Automation also facilitates the implementation of access controls and identity management. Tools like AWS Identity and Access Management (IAM) or Azure Active Directory automate the provisioning and revocation of user access, ensuring proper authentication and authorization for cloud resources. This reduces the risk of unauthorized access and helps enforce compliance with regulatory requirements. Furthermore, automation assists in regulatory compliance by automating audit trails and generating compliance reports. Automated tools can track and log all changes made to cloud resources, providing a comprehensive audit trail for compliance audits. They can also generate compliance reports, such as Payment Card Industry Data Security Standard (PCI DSS) or General Data Protection Regulation (GDPR) reports, by consolidating and analyzing relevant data.
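To make the compliance angle more concrete, here is a minimal sketch (our own illustration, not from the original post) of the kind of automated check such a report could include: a short boto3 script that flags IAM users without an MFA device. The function name and output format are assumptions.

# Minimal sketch: flag IAM users that have no MFA device configured.
# Assumes AWS credentials are already available to boto3.
import boto3


def users_without_mfa():
    iam = boto3.client("iam")
    flagged = []
    # Paginate through all IAM users in the account.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            # A user with an empty MFADevices list has no MFA configured.
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                flagged.append(name)
    return flagged


if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"IAM user without MFA: {name}")

A scheduled job could run a check like this periodically and feed the results into the audit trail or compliance report described above.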
Final Thoughts
Automation plays a pivotal role in revolutionizing cloud computing, providing benefits across the technology stack. From provisioning and configuration management to orchestration, IT migration, application deployment, and security and compliance, automation enhances efficiency, reliability, scalability, and security in cloud environments. Overall, automation empowers organizations to leverage the full potential of cloud computing, driving operational efficiency, cost optimization, agility, and robust security. By embracing automation, businesses can unlock the true benefits of cloud computing and stay ahead in an increasingly competitive digital landscape.

Paulo Srulevitch
Content Creator
Teracloud

If you want to know more about our blogs, we suggest checking out What is Digital Transformation? Learn why it matters. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to stay up to date with any news! 👇

  • What is ChatOps? Integrating Slack, Opsdroid, and Lambda

    ChatOps refers to people, bots, processes, and automated programming, all combining efforts to provide integrated solutions that further optimize output. ChatOps is a great way to automate routine tasks and provide quick access to information across teams and team members. In this post, you'll learn:
What is ChatOps
About Opsdroid
How to build ChatOps

What is ChatOps?
ChatOps is a collaborative approach that combines chat platforms with DevOps practices, allowing teams to streamline workflows, improve communication, and enhance productivity. It involves using chat tools as a central hub for real-time communication, collaboration, and automation to manage and coordinate tasks, deployments, and incident response. In ChatOps environments, chat rooms become dynamic spaces where conversation-driven collaboration takes center stage. The benefits of ChatOps become evident as team members seamlessly interact in these virtual environments, leveraging specialized ChatOps tools to navigate workflows and enhance overall productivity. That's how group chat transforms into a collaborative powerhouse, providing a full view of ongoing processes, from initial ideation to the final stages of code deployment. With the integration of scripts and plugins, tasks are automated, amplifying the efficiency of operations within these collaborative chat rooms. Whether you use popular platforms like Microsoft Teams or other chat tools, a cohesive ChatOps integration not only enhances team collaboration but also empowers teams with a holistic approach to managing tasks and optimizing their collective efforts.

About Opsdroid
Opsdroid is an open-source chatbot framework written in Python and designed to simplify the development and management of conversational agents. It provides a flexible and extensible platform for building chatbots that can interact with users across various messaging platforms like Telegram, Facebook, Slack, and many others.
Key features:
Support for multiple messaging platforms like Slack, Telegram, Gitter, and more.
Takes events from chat services and other sources and executes Python functions, called Skills, based on their contents.
Leverages natural language processing (NLP) libraries for message interpretation.
Automation capabilities for complex conversations and state management.
Suitable for both simple command-based bots and sophisticated chatbot applications.

How to build ChatOps
As we said before, ChatOps is a collaboration between several parts: people, bots, processes, and automation. In this case, we'll integrate an Opsdroid chatbot with Slack to execute Skills through messages on a Slack channel.

Step # 1: Configure Slack
The first step is creating a new Slack App (you must have an account): give it a name and select the workspace you'd like to work in. Then select the "Bots" option inside the "Add features and functionality" tab.
Step # 2: Add scopes
Click "Review Scopes to Add" and add the following scopes under "Bot Token Scopes":
channels:history - View messages and other content in public channels that the bot has been added to
channels:read - View basic information about public channels in a workspace
chat:write - Send messages as the bot
chat:write.customize - Send messages as the bot with a customized username and avatar
commands - Add shortcuts and/or slash commands that people can use
groups:read - View basic information about private channels that the bot has been added to
im:read - View basic information about direct messages that the bot has been added to
incoming-webhook - Post messages to specific channels in Slack
mpim:read - View basic information about group direct messages that the bot has been added to
reactions:write - Add and edit emoji reactions
users:read - View people in a workspace
These scopes give the chatbot access to the channel, let it reply in threads, and allow it to execute commands. Depending on what you want to do, you may need to enable or disable more scopes.

Step # 3: Install to the workspace
Navigate to "OAuth Tokens for Your Workspace", click the "Install to Workspace" button, and select which channel the bot is allowed to post to. After this, take note of the "Bot User OAuth Access Token", as it will be used later in Opsdroid as the bot-token.

Step # 4: Name your token
From the menu on the left select "Socket Mode". Enable it; it will ask for a name for the token and then show you the generated token. Copy it, as it will be used later in Opsdroid as the app-token.

Step # 5: Subscribe to events
Now we must subscribe to events in your new Slack App so Opsdroid can receive them. From the menu on the left select "Event Subscriptions", then below subscribe the bot to message.channels and save the changes.

How to work with Opsdroid

Step # 1: Install and configure Opsdroid
Opsdroid is a FOSS project written in Python, and installing it is as simple as:
$ pip3 install opsdroid[all]
It's not necessary, but you can start it once so it automatically creates the configuration files:
$ opsdroid start
If you want or need to personalize the installation, you can watch this video.

Step # 2: Connect to Slack
Once installed, we need to configure a connector for Slack through the file called configuration.yaml. To see where it is located, execute:
$ opsdroid config path
/home/facu/.config/opsdroid/configuration.yaml
Open it with your favorite editor or with the command opsdroid config edit.

Step # 3: Configure Opsdroid options
Here you will see that Opsdroid offers a lot of configuration options you can customize, like the default room (the channel) or answering in a thread, but for the moment we just want to paste the tokens we generated in Slack as the values for bot-token and app-token:

welcome-message: true
connectors:
  slack:
    # required
    bot-token: "xoxb-9876543210-5265939284067-MyFantasyToken"
    # optional
    socket-mode: true
    app-token: "xapp-1-TOKENAPP-7685941351-02030874446290d9128631e59a1cf163c1ea45ba5a"
    start-thread: true
    bot-name: "mybot"
    default-room: "#test"

Step # 4: Give Opsdroid skills
What makes Opsdroid a very powerful tool is that you can create Skills: Python functions you write that describe how the bot should behave when it receives new events, such as the Slack commands we want to handle.
Higher up in the same configuration.yaml file you will find the Skills section, where you can add entries with the path to your skill file or folder:

## Skill modules
skills:
  ## Hello (https://github.com/opsdroid/skill-hello)
  hello:
    name: helloskill
    path: /home/facu/Escritorio/ops/helloskill.py

Now let's create our first "hello" skill, based on the hello skill from the Opsdroid documentation:

from opsdroid.skill import Skill
from opsdroid.matchers import match_regex
from opsdroid.events import Reaction, Message


class HelloSkill(Skill):
    @match_regex(r"Hello, I'm (?P<name>\w+)")
    async def hello(self, message):
        name = message.entities["name"]["value"]
        await message.respond("Hi " + name + ", I'm a bot, how can I help you?")

Notice how we made use of match_regex, which matches the user's message against a regular expression. Opsdroid has a huge variety of other matchers, like crontab, which lets you schedule skills to run on an interval instead of being triggered by events, or matchers that aim for more natural communication, such as IBM Watson or SAP Conversational AI.

Step # 5: Test your Opsdroid skills
To test it, start Opsdroid with the command opsdroid start, write in the channel you specified before, and you should get a response from the bot like this:

From here, you can continue creating Skills that fit your needs, such as skills that automate routine tasks and provide quick access to information. For example, a skill can retrieve data from databases, execute scripts, or interact with APIs based on user commands. Or you can create a skill that interacts with infrastructure management tools like Terraform or Ansible, executing infrastructure provisioning or configuration changes, providing updates on the status of infrastructure operations, and simplifying communication between teams. If you work with AWS, you can even create a skill that uses the AWS SDK for Python (Boto3) to create, configure, and manage AWS services! Here is an example of a generic skill that triggers an AWS Lambda function:

from opsdroid.skill import Skill
from opsdroid.matchers import match_regex
import boto3


class AWSLambdaSkill(Skill):
    def __init__(self, opsdroid, config):
        super().__init__(opsdroid, config)
        self.lambda_client = boto3.client('lambda', region_name='your-aws-region')

    def invoke_lambda_function(self, function_name, payload):
        response = self.lambda_client.invoke(
            FunctionName=function_name,
            Payload=payload
        )
        # Process the response as per your requirements,
        # e.g. response['Payload'].read() contains the function's output
        return response

    @match_regex(r'^invoke lambda function')
    async def invoke_lambda_function_message(self, message):
        function_name = 'your-lambda-function-name'
        payload = '{"key": "value"}'  # Replace with your payload data
        response = self.invoke_lambda_function(function_name, payload)
        # Process the response or send a reply to the user
        await message.respond(f"AWS Lambda function invoked. Response: {response}")

Note that this is a basic implementation, and you may need to handle exceptions, authentication, and other aspects based on your specific use case. Additionally, you'll need to configure AWS credentials for boto3 to access your AWS account.
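For context, here is a minimal sketch (not part of the original post) of what a matching Lambda function could look like on the AWS side. The handler body and the event shape mirroring the example payload {"key": "value"} are assumptions for illustration:

# Hypothetical Lambda handler for the ChatOps example above.
# Assumes the skill sends a JSON payload like {"key": "value"}.
import json


def lambda_handler(event, context):
    # Read the value sent by the Opsdroid skill (default if missing).
    value = event.get("key", "not provided")
    # Return a small JSON body that the skill can echo back to the channel.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": value}),
    }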
Make sure to replace 'your-aws-region', 'your-lambda-function-name', and '{"key": "value"}' with your actual AWS Region, Lambda function name, and payload data, respectively, and also add the Skill to your Opsdroid configuration.yaml file.

Final Thoughts
This is just one example of how Opsdroid can be used. Its flexibility and extensibility through plugins and adapters allow you to tailor it to your specific needs and integrate it with various tools and systems within your infrastructure.

Facundo Montero
Cloud Engineer
Teracloud

If you want to know more about our tips, we suggest checking Why you should use IMDSv2. If you are interested in learning more about our TeraTips or our blog's content, we invite you to see all the content entries that we have created just for you.

  • Optimize your costs with AWS spot instances and Terraform in just a few steps

    When looking to reduce your cloud expenses, Spot Instances are definitely one of the best alternatives. Today I'll show you how you can leverage them for your non-production environments with Terraform. Spot Instances can be an excellent approach for reducing cloud expenses in non-production settings, but don't forget that it's essential to consider the spot allocation strategy, instance types, and potential hazards while configuring them. Let's have a look.

Step # 1: Configure the AWS launch_template and autoscaling_group
First, we need to configure our aws_launch_template and aws_autoscaling_group resources:

resource "aws_launch_template" "Lt" {
  name_prefix   = "asg-testLT-"
  image_id      = var.ami_id
  ebs_optimized = "false"
  instance_type = var.instance_type
  key_name      = var.key_name
  user_data     = var.user_data

  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 2
  }

  network_interfaces {
    associate_public_ip_address = var.public_ip
    security_groups             = var.security_group
  }

  iam_instance_profile {
    arn = var.instance_profile_arn
  }

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size           = var.root_vol_size
      volume_type           = "gp2"
      delete_on_termination = true
    }
  }
}

And our Auto Scaling group:

resource "aws_autoscaling_group" "asg" {
  vpc_zone_identifier       = data.aws_subnet_ids.private.ids
  name                      = var.asgName
  max_size                  = var.maxSize
  min_size                  = var.minSize
  health_check_grace_period = 100
  default_cooldown          = var.cooldown
  health_check_type         = var.healthCheckType
  desired_capacity          = var.desiredSize
  capacity_rebalance        = true
  force_delete              = true

  mixed_instances_policy {
    instances_distribution {
      # 0% On-Demand above base capacity, i.e. 100% Spot
      on_demand_percentage_above_base_capacity = 0
      spot_allocation_strategy                 = "capacity-optimized-prioritized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.Lt.id
        version            = "$Latest"
      }
      override {
        instance_type = "t3.large"
      }
      override {
        instance_type = "t3a.large"
      }
      override {
        instance_type = "t2.large"
      }
    }
  }

  termination_policies = [
    "OldestInstance",
    "OldestLaunchConfiguration",
    "Default",
  ]
}

Step # 2: Define a mixed_instances_policy
To use Spot Instances in our EC2 Auto Scaling group resource, we need to define a mixed_instances_policy block, which is made up of an instances_distribution block and a launch_template block. The instances_distribution block defines how we want to mix On-Demand and Spot Instances. In our case we want 100% Spot Instances, but there are business cases where a percentage of On-Demand Instances is warranted (critical workloads). The spot allocation strategy defines how we want AWS to choose Spot Instances for our workloads (price, instance type, etc.). We can also define strategies for On-Demand allocation and Spot pooling. The launch_template block references the launch template needed for the Auto Scaling group; in this case we override the instance types that we want for our workload.

Step # 3: Choose your spot allocation strategy
There are several important factors to consider when implementing Spot Instances. The spot allocation strategy is very important and depends on the nature of the workload; the most common strategies are the following:
priceCapacityOptimized: AWS will request Spot Instances from the pools that it believes have the lowest chance of interruption in the near term. Spot Fleet then requests Spot Instances from the lowest priced of these pools.
capacityOptimized: With Spot Instances, pricing changes slowly over time based on long-term trends in supply and demand, but capacity fluctuates in real time. The capacityOptimized strategy automatically launches Spot Instances into the most available pools by looking at real-time capacity data and predicting which pools are the most available.
lowestPrice: The Spot Instances come from the lowest priced pool that has available capacity. This is the default strategy; however, AWS recommends overriding the default by specifying the priceCapacityOptimized allocation strategy.
In this case the capacityOptimized strategy (in its capacity-optimized-prioritized variant) was chosen over the others, because these specific workloads depend on certain instance types to work properly; that is why we override the instance types in the launch_template block. The main fear with Spot Instances is that interruptions can interfere with developers' work in non-production environments and affect productivity. That is why we have to choose the instance types for our Spot pool carefully, which we can do using the AWS Spot Instance advisor: https://aws.amazon.com/ec2/spot/instance-advisor/.

Step # 4: Select instance types
To choose an instance type for our workloads, it is important to take the following into account:
vCPU
Memory
Region
Let's say that for this example we needed an instance in the us-east-2 region with 2 vCPUs and 8 GB of memory and a tiny chance of interruption. In the Spot Instance advisor we can find the t3.large instance, which offers 70% savings over On-Demand with an interruption frequency of less than 5%, so our workloads are better protected from being terminated all the time. It is important to review and analyze the instances that can fulfill our workload needs, and it is recommended to define at least four instance types to ensure Spot availability.

Final thoughts
To sum it up, Spot Instances can be a fantastic solution for lowering cloud costs in non-production environments. However, it's crucial to take into account the spot allocation strategy, instance types, and potential risks when setting them up. By using helpful tools like the AWS Spot Instance advisor, you can make well-informed choices and ensure that your workloads are both cost-effective and dependable. Remember, Spot Instances might not be the right fit for every situation, so it's important to assess your specific requirements and workloads before implementing them. Happy coding and see you in the Cloud!

References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-allocation-strategy.html
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group#instance_requirements

Juan Bermudez
Cloud Engineer
Teracloud

If you want to know more about Terraform, we suggest checking How to Restore a Previous Version of Terraform State. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to stay up to date with any news! 👇

  • How to get started with Talisman in 4 simple steps.

    Looking for the best way to lock down your sensitive info? Talisman is a tool that installs a hook in your repository to ensure that potential secrets or sensitive information do not leave the developer's workstation. At the starting point of our DevSecOps pipeline there are developers; remember, they're humans! With this in mind, we must take care of our secrets. There are plenty of cases where sensitive information is accidentally pushed to our SCM; take a look at this sad story to get an idea of how bad a situation like that can go. Here is where tools like Talisman become helpful. It validates the outgoing changeset for things that look suspicious, such as potential SSH keys, authorization tokens, private keys, etc. Better yet, Talisman can also be used as a repository history scanner to detect secrets that have already been checked in, so you can make an informed decision to safeguard them. Let's take a look at how to get started.

Step # 1: Install Talisman
In the following demo we're going to configure Talisman for a single project, so we proceed with the installation.

# Download the Talisman installer script
curl https://thoughtworks.github.io/talisman/install.sh > ~/install-talisman.sh
chmod +x ~/install-talisman.sh

Step # 2: Choose which script to execute
This will depend on our needs: pre-push vs. pre-commit. (For this example we chose pre-push.)

# Install to our project
cd teratip-talisman/
# as a pre-push hook
~/install-talisman.sh
# or as a pre-commit hook
~/install-talisman.sh pre-commit

Step # 3: Start the simulation
Now, we're going to simulate a sensitive information leak.

# Make a directory and generate some random data simulating sensitive info
mkdir sec-files && cd sec-files
echo "username=teracloud-user" > username
echo "password=teracloud-password" > password.txt
echo "apiKey=aPPs32988sab21SA1221vdsXeTYY_243" > ultrasecret
echo "base64encodedsecret=aPPs32988sss67SA1229vdsXeTXY_27777==" > secret

Step # 4: Commit the changes and push
Alright! We have some sensitive data in our repository, now let's commit the changes and push! Oops! Something went wrong! (or not!) Talisman scans our code before pushing, and the result is that the push failed. You can also ignore these errors if you find it best. Just create a .talismanrc file as shown in the output of our latest command (git push):

# Ignore a secret to allow the push into the remote repository
vi .talismanrc
# Paste the desired secret that the Talisman scan will ignore and push to the repo
fileignoreconfig:
- filename: sec-files/password.txt
  checksum: 742a431b06d8697dc1078e7102b4e2663a6fababe02bbf79b6a9eb8f615529cb

Disclaimer: Secrets creeping in via a forced push in a git repository cannot be detected by Talisman. A forced push is believed to be notorious in its own ways, and we suggest git repository admins apply appropriate measures to authorize such activities.

Tomás Torales
Cloud Engineer
Teracloud

References:
https://github.com/thoughtworks/talisman
https://thoughtworks.github.io/talisman/docs

Have a question? For more info go to the official Talisman docs: https://thoughtworks.github.io/talisman/docs
