

  • Monitor your website using CloudWatch Synthetics

    LEVEL: INTERMEDIATE

“You can use Amazon CloudWatch Synthetics to create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs. Canaries follow the same routes and perform the same actions as a customer, which makes it possible for you to continually verify your customer experience.” [1]

In this TeraTip, we are going to use a canary to monitor a specific URL so that we quickly know whether the website is up. So, let’s configure our canary…

Creating the Canary

The first step is to access the CloudWatch console, where we can find Synthetics Canaries under the “Application monitoring” section in the left menu. Then we create a canary by clicking the orange “Create canary” button, and luckily for us, there is already a blueprint called “Heartbeat monitoring” which does exactly what we want: monitor the health of a website.

Now, let’s take a look at the configuration options. First, we have the name of the canary, and after that, the most important value: the URL of the webpage that we want to monitor. As you can see, it’s possible to add up to five endpoints to one canary. The “Take screenshot” option is, as its name suggests, for taking a screenshot of the website every time the canary runs; this is what the customer sees when they access the URL.

The following section is the “Script editor”, and this is great because using the blueprint saves us from writing the whole script. In this case, the script is written in Node.js and makes use of the Puppeteer library.

Then, we need to define a schedule to run the canary. Here I’m choosing to run it continuously every 5 minutes, starting immediately after creation. Something important to mention is that canaries have a (very low) cost per execution. It is minimal, but you need to be aware of it; you can check the pricing in the AWS documentation [2].

Now it’s necessary to define a retention period and an S3 bucket for saving the execution results.
As I just want to use this canary as a health check, I choose the minimum retention possible: 1 day for both failed and successful executions. The S3 bucket is created automatically if this is the first canary in the account; otherwise, the existing bucket is reused.

Continuing with the configuration, we need to set up the permissions for this canary. I choose “Create a new role”, which automatically creates a role with the required policy attached. This role is assumed by the Lambda function that runs the script.

Finally, we need to define what happens when the health check fails. There are two possibilities: create a CloudWatch alarm and/or send a notification using SNS. I chose to create an alarm that is triggered every time the health check fails within the last 5 minutes. This alarm can be configured as usual by setting up an action; for example, it’s possible to notify an SNS topic, trigger an action on the EC2 instances that contain the web server, or trigger a Lambda function. To finish, click the orange “Create canary” button.

Now, let’s take a look at what was created and what we can see in the canary’s monitoring.

Resources created by the canary wizard

We have already seen the policy and role used by the canary, and also the S3 bucket defined to store the results; this bucket is created with the first canary and reused by the following ones. We also have the alarm that was defined in the canary. Fortunately, our website is up :), but if it weren’t, we would know quickly… But there is one more resource involved, and probably the most important one: the Lambda function that runs the script every time the canary executes.
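For reference, everything the wizard assembles can also be expressed programmatically. Below is a minimal, hypothetical sketch of the parameter set for the Synthetics CreateCanary API; the canary name, bucket, role ARN, and runtime version are illustrative assumptions, not values taken from this tip:

```python
def canary_params(name, artifact_bucket, role_arn, rate_minutes=5, retention_days=1):
    """Assemble the CreateCanary parameters the console wizard builds for us."""
    return {
        "Name": name,
        "ArtifactS3Location": f"s3://{artifact_bucket}/canary/{name}",
        "ExecutionRoleArn": role_arn,
        "RuntimeVersion": "syn-nodejs-puppeteer-3.9",  # assumed Node.js/Puppeteer runtime
        "Schedule": {"Expression": f"rate({rate_minutes} minutes)"},
        "SuccessRetentionPeriodInDays": retention_days,  # 1 day, as chosen above
        "FailureRetentionPeriodInDays": retention_days,
        "Code": {"Handler": "pageLoadBlueprint.handler"},  # heartbeat blueprint entry point
    }

params = canary_params(
    "website-heartbeat",
    "cw-syn-results-example",
    "arn:aws:iam::123456789012:role/CloudWatchSyntheticsRole",
)
# A real run would then call: boto3.client("synthetics").create_canary(**params)
```

The console does exactly this behind the scenes; scripting it is mostly useful when you want the same canary stamped out across several accounts.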
If you re-read the definition of a canary in the very first paragraph of this tip, it says: “You can use Amazon CloudWatch Synthetics to create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs…” You can check the code of that script, generated from the blueprint we chose, in the Lambda function.

Monitoring

The first thing we find on the Synthetics Canaries dashboard is a summary of the current state of all our canaries; in this case, there is just one. By accessing the canary we find specific and historical information about the last and previous executions. We can even find the screenshot of the website (if you remembered to check “Take screenshot” in the canary configuration) and the related logs.

The aim of this TeraTip is to show an easy (as you could see), but not widely known, way to implement health check monitoring for your URLs. As we can see, canaries are very useful, and they provide a lot of information that can help us know the state of our website and take action faster.

References
[1] Using synthetic monitoring
[2] Amazon CloudWatch Pricing
[3] Creating a canary

Ignacio Rubio
Cloud Engineer
Teracloud

If you want to know more about CloudWatch, we suggest checking out Create custom Metrics for AWS Glue Jobs. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • Create custom Metrics for AWS Glue Jobs.

    As you know, CloudWatch lets you publish custom metrics from your applications; these are metrics that are not provided by the AWS services themselves. Traditionally, applications published custom metrics to CloudWatch by calling the PutMetricData API, most commonly through the AWS SDK for the language of your choice. With the newer CloudWatch Embedded Metric Format (EMF), you can simply embed the custom metrics in the logs that your application sends to CloudWatch, and CloudWatch will automatically extract the custom metrics from the log data. You can then graph these metrics in the CloudWatch console and even set alerts and alarms on them like other out-of-the-box metrics. This works anywhere you publish CloudWatch logs from: EC2 instances, on-prem VMs, Docker/Kubernetes containers in ECS/EKS, Lambda functions, etc.

In this case, we focus on custom metrics for AWS Glue job executions. The final aim of this task is to create a CloudWatch alarm that identifies whether a Glue job execution was successful or not. The proposed solution is the one shown in the following diagram.

Infrastructure

We will create the infrastructure and permissions needed with Terraform.
resource "aws_cloudwatch_event_rule" "custom_glue_job_metrics" {
  name        = "CustomGlueJobMetrics"
  description = "Create custom metrics from glue job events"
  is_enabled  = true

  event_pattern = jsonencode(
    {
      "source" : ["aws.glue"],
      "detail-type" : ["Glue Job State Change"]
    }
  )
}

resource "aws_cloudwatch_event_target" "custom_glue_job_metrics" {
  target_id = "CustomGlueJobMetrics"
  rule      = aws_cloudwatch_event_rule.custom_glue_job_metrics.name
  arn       = aws_lambda_function.custom_glue_job_metrics.arn

  retry_policy {
    maximum_event_age_in_seconds = 3600
    maximum_retry_attempts       = 0
  }
}

resource "aws_lambda_function" "custom_glue_job_metrics" {
  function_name    = "CustomGlueJobMetrics"
  filename         = "python/lambda.zip" # illustrative zip name; it must contain the handler and its libraries
  source_code_hash = filebase64sha256("python/lambda.zip")
  role             = aws_iam_role.custom_glue_job_metrics.arn
  handler          = "handler.handler"
  runtime          = "python3.9"
  timeout          = 90

  tracing_config {
    mode = "PassThrough"
  }
}

resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.custom_glue_job_metrics.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.custom_glue_job_metrics.arn
}

resource "aws_iam_role" "custom_glue_job_metrics" {
  name = "CustomGlueJobMetrics"
  assume_role_policy = jsonencode(
    {
      Version : "2012-10-17",
      Statement : [
        {
          Effect : "Allow",
          Principal : { Service : "lambda.amazonaws.com" },
          Action : "sts:AssumeRole"
        }
      ]
    })
}

resource "aws_iam_role_policy" "custom_glue_job_metrics" {
  name = "CustomGlueJobMetrics"
  role = aws_iam_role.custom_glue_job_metrics.id
  policy = jsonencode({
    Version : "2012-10-17",
    Statement : [
      {
        Effect : "Allow",
        Action : [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource : "arn:aws:logs:*:*:*"
      }
    ]
  })
}

With this we have created the event rule, the event target, and the Lambda function (where the Python code below will run), together with the permissions they need. It should be noted that we are using the default event bus.
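Before looking at the Lambda code in the next section, it helps to see what EMF actually is under the hood: a plain JSON log line with an `_aws` metadata block that tells CloudWatch which fields to extract as metrics. A minimal sketch (the namespace and dimension names follow this tip; the job name is illustrative):

```python
import json
import time

def emf_record(job_name, state):
    """Build an EMF-formatted log line; CloudWatch extracts the metric automatically."""
    metric_name = state.capitalize()  # e.g. "Succeeded" / "Failed"
    return json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "GlueBasicMetrics",
                "Dimensions": [["JobName"]],
                "Metrics": [{"Name": metric_name, "Unit": "Count"}],
            }],
        },
        # Target members referenced by the metadata above:
        "JobName": job_name,
        metric_name: 1,
    })

line = emf_record("IotEtlTransformationJob", "SUCCEEDED")
# Printing this line from a Lambda is enough; no PutMetricData call is needed.
```

The aws-embedded-metrics library used below generates exactly this kind of record for you, so you rarely have to build it by hand.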
Python Code

The Python code that will run in the Lambda function is the following:

from aws_embedded_metrics import metric_scope

@metric_scope
def handler(event, _context, metrics):
    glue_job_name = event["detail"]["jobName"]
    glue_job_run_id = event["detail"]["jobRunId"]

    metrics.set_namespace("GlueBasicMetrics")
    metrics.set_dimensions(
        {"JobName": glue_job_name},
        {"JobName": glue_job_name, "JobRunId": glue_job_run_id}
    )

    if event["detail-type"] == "Glue Job State Change":
        state = event["detail"]["state"]
        if state not in ["SUCCEEDED", "FAILED", "TIMEOUT", "STOPPED"]:
            raise AttributeError("State is not supported.")
        metrics.put_metric(key=state.capitalize(), value=1, unit="Count")
        if state == "SUCCEEDED":
            metrics.put_metric(key="Failed", value=0, unit="Count")
        else:
            metrics.put_metric(key="Succeeded", value=0, unit="Count")

This code creates a new namespace (GlueBasicMetrics) within CloudWatch metrics with two dimension sets (JobName and JobName,JobRunId), and it is updated each time the Glue job runs, since the job state change event is what triggers the function.

Install module and libraries

As you could see in the Terraform code, we import the source code as a .zip file. It is very important to highlight that the Python code and the previously installed libraries have to be compressed into this .zip file at the same path level.

Installation: pip3 install aws-embedded-metrics

CloudWatch Alarm

Great, now that we have the necessary metrics, we can focus on the main objective of this article: CREATE A CLOUDWATCH ALARM to identify when a job execution failed. We will create this alarm with Terraform too.
resource "aws_cloudwatch_metric_alarm" "job_failed" {
  alarm_name          = "EtlJobFailed"
  metric_name         = "Failed"
  namespace           = "GlueBasicMetrics"
  period              = "60"
  statistic           = "Sum"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  threshold           = "1"
  evaluation_periods  = "1"
  treat_missing_data  = "ignore"

  dimensions = {
    JobName = "IotEtlTransformationJob"
  }

  alarm_actions = [aws_sns_topic.mail.arn, aws_sns_topic.chatbot.arn]
}

As an example, we use a mail topic and a chatbot topic as the SNS targets. Keep in mind that you must create the SNS topics needed to send notifications when the alarm fires.

I hope this TeraTip is useful for you and helps you accomplish the Performance Efficiency and Operational Excellence pillars of the Well-Architected Framework in your environment.

Martín Carletti
Cloud Engineer
Teracloud

If you want to know more about CloudWatch, we suggest checking out Your AWS invoice is getting bigger and bigger because of CloudWatch Logs, and you don't know why? If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • Two tools for handling obsolete APIs in k8s

    LEVEL: INTERMEDIATE

When we use Kubernetes to deploy services, we often find ourselves needing to update their APIs or perform a cluster upgrade. As a good practice, before performing either of these actions, we need to know the status of our current and future APIs, to validate that these actions will not affect the normal functioning of our applications. For this, today we bring you two tools that will make this task easier: Kube No Trouble (kubent) and Pluto.

APIs

Before continuing, it’s good to remember that Kubernetes is a system based on APIs, and in each version these evolve as new features land. They have a versioning system that goes through three stages: Alpha, Beta, and Stable. Eventually, under the Kubernetes deprecation policy, they may be marked as "deprecated" or "removed" in a future release of k8s.

Identify outdated APIs

Kubent: It’s a practical and easy tool that allows us to identify whether we are using API versions that are obsolete or about to be removed. Kubent connects to our cluster and generates an easy-to-read report indicating which APIs are deprecated and which will be removed in future versions of Kubernetes. You can also specify files outside the cluster to be analyzed.

Installation: sh -c "$(curl -sSL '')"

Applications:

Analyzing APIs on the active cluster: In this example, we are going to check the status of the APIs in use, targeting an upgrade to Kubernetes v1.25.0.

$ kubent --target-version 1.25.0

We can see that we have two API versions to update: the first ones were removed in version 1.22 and the rest no longer exist in version 1.25.

Detecting APIs in Kubernetes configuration files:

$ kubent --filename nginx-deployment.yaml --cluster=false

In this case, the report shows us that the API used was removed and replaced by the “apps/v1” version in K8s v1.9.

Pluto
It’s another tool that facilitates the task of identifying APIs. It can verify the Helm charts running in our cluster, and it can also be integrated into our CI/CD as part of the process to validate the obsolescence of your infrastructure as code.

Installation: curl -sL "" | tar -zx --add-file pluto && sudo mv pluto /usr/local/bin/pluto && chmod +x /usr/local/bin/pluto

Applications:

Analyzing APIs on the active cluster: We are going to analyze the Helm packages on the cluster, targeting k8s v1.22.

$ pluto detect-helm -o wide -t k8s=v1.22

In this case, just as with kubent, two API versions are detected that will be removed in version 1.22.

Detecting outdated APIs in configuration files in a specific directory:

$ pluto detect-files

The report tells us that the API used was removed and replaced by the "apps/v1" version.

Analyzing the APIs of a Helm chart before deploying it:

$ helm template stable/nginx-ingress --version 0.11.1 | pluto detect -o wide -t k8s=v1.25 -

Bonus track

Plugin mapkubeapis: Mapkubeapis is a Helm v3 plugin that updates outdated API versions of releases in a Kubernetes cluster.

Installation: $ helm plugin install

Applications:

Simulate API updates: $ helm mapkubeapis grafana --dry-run --namespace default

Apply an update to deprecated APIs: $ helm mapkubeapis grafana --namespace default

Marcelo Ganin
Cloud Engineer
Teracloud

If you want to know more about K8s, we suggest checking out K8s Cluster Auto-scalers: Autoscaler vs Karpenter If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • Streamlining Security with Amazon Security Hub: A Step-by-Step Guide on Where to Start

    LEVEL: INTERMEDIATE

Introduction

Amazon Security Hub is a security service offered by Amazon Web Services (AWS) that aggregates and prioritizes security findings from multiple AWS services and third-party security tools, making it easier for customers to manage their security posture.

One of the key benefits of using Amazon Security Hub is that it provides a centralized view of security findings from multiple sources. This allows customers to quickly identify and prioritize potential security issues, rather than having to navigate multiple separate security tools and services.

Another benefit of Amazon Security Hub is that it integrates with other AWS services, such as AWS Config and Amazon GuardDuty, to provide additional security insights. For example, AWS Config can be used to assess the compliance of resources in an AWS account, while Amazon GuardDuty can be used to detect and respond to potential security threats. By integrating these services with Amazon Security Hub, customers can gain a more comprehensive understanding of their security posture and take more effective actions to improve it.

Amazon Security Hub also provides automation capabilities, allowing customers to set up automatic remediation actions for certain types of security findings. This can help to quickly and efficiently address potential security issues, reducing the time and effort required to manually investigate and resolve each finding.

Enabling Security Hub (console)

When you enable Security Hub from the console, you also have the option to enable the supported security standards. To enable Security Hub:

1. Use the credentials of your IAM identity to sign in to the Security Hub console.
2. When you open the Security Hub console for the first time, choose Enable AWS Security Hub.
3. On the welcome page, Security standards lists the security standards that Security Hub supports. To enable a standard, select its check box. To disable a standard, clear its check box.
You can enable or disable a standard or its individual controls at any time. For information about the security standards and how to manage them, see Security standards and controls in AWS Security Hub. Choose Enable Security Hub.

Next Steps

Configure integration with other AWS services. As mentioned earlier, Amazon Security Hub can integrate with other AWS services such as AWS Config and Amazon GuardDuty to provide additional security insights. To set up these integrations, customers will need to enable the relevant services in their AWS account and configure them to send findings to Security Hub.

Set up custom actions and automated remediation. Once the integrations are set up, customers can create custom actions and automated remediation workflows to address specific types of security findings. For example, they can set up an automatic remediation workflow that terminates an EC2 instance when it is identified as compromised.

Review and prioritize findings. Once Amazon Security Hub is set up and configured, it will start to aggregate and prioritize security findings from multiple sources. Customers should regularly review these findings and prioritize them based on their level of risk.

By following these steps, you can effectively implement Amazon Security Hub and begin to improve your security posture by identifying and addressing potential security threats in a more efficient and streamlined way.

Final Thoughts

In conclusion, Amazon Security Hub is a powerful security service that can help customers to improve their security posture by providing a centralized view of security findings from multiple sources, integrating with other AWS services, and providing automation capabilities for remediation. Implementing Amazon Security Hub requires setting up the service, configuring integrations, creating custom actions and automated remediation workflows, creating and assigning security standards, and regularly reviewing and prioritizing findings.
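The console steps above can also be scripted. The sketch below only assembles the request payloads for the Security Hub EnableSecurityHub and BatchEnableStandards API calls; the standard ARN layout is an assumption to verify against your region, and the actual boto3 calls are left commented out since they act on a live account:

```python
def security_hub_requests(region):
    """Build request payloads for enabling Security Hub plus one standard."""
    # Assumed ARN layout for the AWS Foundational Security Best Practices standard.
    fsbp_arn = (
        f"arn:aws:securityhub:{region}::standards/"
        "aws-foundational-security-best-practices/v/1.0.0"
    )
    return {
        "enable_hub": {"EnableDefaultStandards": False},
        "enable_standards": {
            "StandardsSubscriptionRequests": [{"StandardsArn": fsbp_arn}]
        },
    }

reqs = security_hub_requests("us-east-1")
# hub = boto3.client("securityhub", region_name="us-east-1")
# hub.enable_security_hub(**reqs["enable_hub"])
# hub.batch_enable_standards(**reqs["enable_standards"])
```

Scripting this is mainly useful when you manage many accounts and want Security Hub enabled consistently in each of them.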
If you need assistance with implementing Amazon Security Hub or have any questions about how it can help you to improve your security, please don't hesitate to reach out for help. We will be more than happy to assist you and provide you with the information you need to make the most of this powerful security service.

Damian Gitto Olguín
Co-Founder/CTO/AWS Hero
Teracloud

If you want to know more about AWS security, we suggest checking out What did AWS re:Invent bring us in terms of Security? If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • How to get started with Terraform console

    The Terraform console is a tool that Terraform provides to evaluate expressions or debug resources in a state interactively. It’s a beneficial ally when we are working with Terraform functions and want to test the result before applying, or when we are working with resources or modules whose outputs we are not sure about and want to debug what is returned.

To use the Terraform console, we run the command:

terraform console

If there is no state in the folder where the command was run, it will work as a simple interpreter. On the other hand, if a state is found (or a remote one is loaded), the console will read it so that the interpreter starts with the data of our resources at hand.

Let’s work through an example of creating a VPC. Suppose we want to create a VPC with 4 subnets in a dynamic way. For this, we are going to open the Terraform console and make use of Terraform functions. First, we will use a function called “cidrsubnet”: given the CIDR of the VPC, a number of additional bits, and the subnet number that we want, it returns the CIDR block of that subnet.

Let’s try the following example. If we change the subnet number, the CIDR block increases. This can be made a little more dynamic using a range and a for expression, so we can get the number of subnets we want. Let's see what happens with 4 subnets. And that's it: it returns our 4 subnets, ready to be used in a resource or a module. Using the VPC module as an example, we would have to add something like this. And that's it; when we apply, we will have our VPC with 4 public and 4 private subnets.

In this way, we can use and test any Terraform function and make sure that the results are really what we expected. Another good use of the Terraform console (the one I use the most) is debugging resources that are already in the cloud. A quick example would be to see the values that a data source brings to us.
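If you want to sanity-check cidrsubnet's arithmetic outside of Terraform, Python's ipaddress module computes the same thing; the VPC CIDR below is just an example:

```python
import ipaddress

def cidrsubnet(prefix, newbits, netnum):
    """Python equivalent of Terraform's cidrsubnet(prefix, newbits, netnum)."""
    network = ipaddress.ip_network(prefix)
    # Enumerate the subnets `newbits` smaller and pick the netnum-th one.
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

# The same for-range idea from the console, done in Python:
subnets = [cidrsubnet("10.0.0.0/16", 8, i) for i in range(4)]
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```

This is handy for double-checking what terraform console returns before you wire the expression into a module.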
Now, if we go to the Terraform console and ask for the same data source, we will receive its attributes. Like this, we can inspect the data of all our resources, modules, and Terraform data sources to see their values. It’s very useful when the infrastructure becomes very large and you want to quickly see the exact values that are being passed to other resources without having to go through the code.

Fabricio Blas
Cloud Engineer
Teracloud

If you want to know more about Terraform, we suggest checking out Terraform Workspaces If you are interested in learning more about our #TeraTips or our blog content, we invite you to see all the content entries that we have created for you and your needs.

  • Reinforce the power of your AWS scanning with Trivy

    LEVEL: BEGINNER

As we already know, AWS provides a useful tool to scan our images for vulnerabilities when we push them to our registry. In this TeraTip we are going to add an extra security layer by making use of an open-source tool called Trivy. Trivy is a comprehensive and versatile security scanner: it has scanners that look for security issues, and targets where it can find those issues.

Targets (what Trivy can scan):
Container Image
Filesystem
Git Repository (remote)
Virtual Machine Image
Kubernetes
AWS

Scanners (what Trivy can find there):
OS packages and software dependencies in use (SBOM)
Known vulnerabilities (CVEs)
IaC issues and misconfigurations
Sensitive information and secrets
Software licenses

Let us begin with a demo on Docker image scanning.

1) Install Trivy. In my case, locally, and since I'm using an Ubuntu distribution, I will proceed with the following:

sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - | sudo apt-key add -
echo deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy

2) Execute trivy -v to verify the installation.

3) Now, we can run trivy image ${our_image_to_scan}. For example: trivy image adoptopenjdk/openjdk8:alpine-slim

4) Let's try another one; run trivy image php:8.1.8-alpine Ok, this output looks a bit more dangerous.

5) Fair enough. Now it would be helpful to automate these scans for use in our DevSecOps pipelines. Create a script file, open it with your IDE of choice, and paste the following content:

#!/bin/bash
dockerImageName=$(awk 'NR==1 {print $2}' Dockerfile)
echo $dockerImageName

These initial lines grab the Docker image name from the Dockerfile and echo it to the terminal.

6) We continue editing our script. With the Trivy commands below, we check for different severity levels among the vulnerabilities found.
If the Trivy scan exits with code 0, no CRITICAL vulnerabilities were found in the image. If the exit code is 1, then we know without a doubt that we have critical vulnerabilities in our image.

trivy image --exit-code 0 --severity MEDIUM,HIGH $dockerImageName
trivy image --exit-code 1 --severity CRITICAL $dockerImageName

7) The previous step is very delightful, but how do we leverage our DevSecOps pipelines with this information? Here is where we can make a build pipeline pass or fail depending on the exit code. Let's add the bash conditional:

# Trivy scan result processing
exit_code=$?
echo "Exit Code : $exit_code"

# Check scan results
if [[ "${exit_code}" == 1 ]]; then
    echo "Image scanning failed. Vulnerabilities found"
    exit 1;
else
    echo "Image scanning passed. No CRITICAL vulnerabilities found"
fi;

Alright! Now we are able to scan our Docker images and take action based on the exit code, which reflects the vulnerabilities found. Let's take a look at the final script and how we can implement it in a Jenkins pipeline.

#!/bin/bash
dockerImageName=$(awk 'NR==1 {print $2}' Dockerfile)

trivy image --exit-code 0 --severity MEDIUM,HIGH $dockerImageName
trivy image --exit-code 1 --severity CRITICAL $dockerImageName

# Trivy scan result processing
exit_code=$?
echo "Exit Code : $exit_code"

# Check scan results
if [[ "${exit_code}" == 1 ]]; then
    echo "Image scanning failed. Vulnerabilities found"
    exit 1;
else
    echo "Image scanning passed. No CRITICAL vulnerabilities found"
fi;

Jenkinsfile

pipeline {
    agent any
    stages {
        stage('Trivy Vulnerability Scan - Docker') {
            steps {
                sh "bash"
            }
        }
    }
}

Note: There are some necessary steps to configure Jenkins, install the required plugins, the dependencies, and so on, but since this is not a Jenkins TeraTip, and for brevity, we keep it as simple as possible.
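When bash exit codes aren't convenient, e.g. inside a Python-based pipeline, the same gate can be driven from Trivy's JSON output (trivy image -f json ...). A sketch with a stubbed two-finding report standing in for a real scan:

```python
import json
from collections import Counter

def severity_counts(trivy_json):
    """Count vulnerabilities per severity in a `trivy image -f json` report."""
    counts = Counter()
    for result in trivy_json.get("Results", []):
        # "Vulnerabilities" can be absent or null for clean targets.
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln["Severity"]] += 1
    return counts

# Stubbed report in place of: json.load(open("report.json"))
report = json.loads("""{"Results": [{"Vulnerabilities": [
    {"VulnerabilityID": "CVE-2023-0001", "Severity": "CRITICAL"},
    {"VulnerabilityID": "CVE-2023-0002", "Severity": "HIGH"}]}]}""")

counts = severity_counts(report)
exit_code = 1 if counts["CRITICAL"] else 0  # mirrors the bash gate above
```

Parsing the JSON also lets you report counts per severity in the build log instead of a bare pass/fail.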
References:

Tomás Torales
Cloud Engineer
Teracloud

If you want to know more about cloud security, we suggest checking out What did AWS re:Invent bring us in terms of Security? If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • Keep your S3 buckets safe in transit and at rest

    When using AWS S3, our favorite AWS service for keeping static files, we have two ways of uploading objects: HTTP or HTTPS. When we (or our applications) store files using the HTTP endpoint of S3, all the traffic that we send to S3 travels unencrypted. If we use HTTP instead of HTTPS, all requests and responses can be read by anyone monitoring the session, so any malicious actor can intercept the data. Never share your personal data on a website that doesn’t use HTTPS (look for the padlock in the browser).

To avoid this problem on S3, there are two things that we can do:
Always use the HTTPS endpoint; if you are using any AWS SDK, that is the default behavior.
Deny all HTTP traffic in your S3 bucket policy. S3 bucket policies are documents that allow you to protect access to all your S3 bucket objects; a deny statement conditioned on aws:SecureTransport being false is what we need to enforce HTTPS traffic.

Also, to ensure that all the files in our S3 buckets are encrypted at rest, we have two options:
Set default encryption in the S3 bucket properties.
Add a statement to the S3 bucket policy to deny all PutObject operations that do not contain the encryption header.

Finally, when we build safe and reliable applications (not just on AWS S3), we must always ensure that all our data is encrypted in transit and at rest. Hope you have learned something new; happy coding and see you in the cloud!

Juan Bermudez
Cloud Engineer
Teracloud

If you want to know more about cloud security, we suggest checking out What did AWS re:Invent bring us in terms of Security? If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs.

  • What did AWS re:Invent bring us in terms of Security?

    re:Invent is the most anticipated event in the AWS community, not only because of the networking and relationships built there but also because it is the time to learn first-hand what new tools and features AWS will offer us in this new cycle. In the case of security, the announcements covered various tools; let's see what they are about.

Amazon Security Lake

It allows you to centralize security events automatically from cloud, on-premises, and custom security sources across Regions, giving you the chance to optimize and manage security data for more efficient storage and query performance. The current AWS services that provide activity logs are:
Amazon VPC
Amazon S3
AWS Lambda
Amazon Route 53
AWS CloudTrail
AWS Security Hub

Also, through Security Hub, any service that integrates with it can send info to the data lake. The principal benefit of this data lake is that it allows you to analyze the data using your preferred analytics tools while retaining control and ownership of your security data. You can use Amazon Athena, Detective, OpenSearch, or SageMaker on the AWS side, or any other third-party tool. This is possible because the data is normalized to an industry standard, the Open Cybersecurity Schema Framework, avoiding vendor lock-in.

Amazon GuardDuty RDS Protection

It’s threat detection for Amazon Aurora databases that allows you to identify potential threats to the data stored in them. It uses machine learning, continuously monitoring existing and new Amazon Aurora databases in your organization. So now you can easily identify if your DB users are showing anomalous behavior, like trying to connect from outside the organization when they always connect from inside, or if your database is facing password spraying or brute force attacks trying to discover your users' passwords.
It has a free trial, and it shouldn’t impact database performance or require modifications to enable it.

Amazon Inspector for Lambda Functions

With this, Amazon Inspector is able to map vulnerabilities detected in software dependencies (CVEs) used in AWS Lambda functions and in the underlying Lambda layers. It supports automatic exclusion for functions that haven’t been invoked in 90 days and manual exclusion based on tags. It costs $0.30 per function, per month (there is no extra charge per re-scan).

Amazon Macie Automated Data Discovery

The interactive S3 data map allows you to easily check the strength of your data security posture: how many buckets are encrypted, allow public access, etc. Another benefit of this map is that, because Macie now automatically scans bucket objects searching for sensitive data, you can check in the interactive map the sensitive-data report and the sensitivity score for each bucket, giving you cost-efficient visibility into sensitive data stored in Amazon S3. It has a 30-day free trial, and afterwards it’s billed according to the total number of objects in S3 in your account, on a daily basis.

Amazon Verified Permissions

It validates user identity through integration with several trust providers, allowing you to sync user profiles, attributes, and group memberships, accompanied by fine-grained permissions and authorization rules.
This way, it generates a security perimeter around the application, with policy and schema management. It simplifies compliance audits at scale, identifies overprovisioned permissions, and connects to monitoring workflows that analyze millions of permissions across applications with the power of automated reasoning. It allows you to build applications faster and supports Zero Trust architectures with dynamic, real-time authorization decisions, based on governing fine-grained permissions within applications and data with policy lifecycle management.

AWS KMS external key store (XKS)

The objective of this feature is to serve users who want to protect their data with a cipher key that isn’t stored in the cloud (due to country regulations or compliance requirements): it extends the existing AWS KMS custom key store feature beyond AWS CloudHSM (a customer-controlled, single-tenant HSM inside AWS data centers) to keys in on-premises HSMs, providing the same integration that KMS has with all the AWS services.

AWS Config Proactive Compliance

Proactively check for compliance with AWS Config rules prior to resource provisioning. By running these rules before provisioning, for example in an infrastructure-as-code CI/CD pipeline, you can detect non-compliant resources earlier, which saves you remediation time later, when the whole system is operative!

AWS Control Tower – Comprehensive Controls Management

By defining, mapping, and managing the controls required to meet the most common control objectives and regulations, you can apply managed preventive, detective, and proactive controls to accounts and organizational units (OUs) by service, control objective, or compliance framework, reducing the time to vet AWS services from months or weeks to minutes.

AWS Control Tower Account Factory Customization (AFC)

Previously, only standard settings were available for VPCs and similar resources, and customization required a combination of Customizations for Control Tower and other tooling.
Now, Service Catalog products can be specified when creating an account or adding an account to Control Tower. The product is automatically deployed when the account is created, performing the account's initial setup. Service Catalog products are defined in CloudFormation templates, allowing for flexible initial setup. If you are interested in learning more about these new features, you can check the playlist of re:Invent sessions related to Security, Compliance, and Identity. To learn about the top announcements, click here. Lourdes Dorado Cloud Engineer Teracloud If you want to know more about Cloud Security, we suggest checking Best Security Practices, Well-Architected Framework If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • Use Dependabot to get Slack notifications using GitHub Actions

    Dependabot and GitHub working together Dependabot is a tool integrated with GitHub that allows us to automate the analysis and updating of dependencies in our projects. It works by analyzing the dependency files in our projects and verifying that no newer versions exist in the official repositories. It then creates automated Pull Requests (PRs) for out-of-date dependencies. Dependabot works in three ways: listing vulnerabilities in the dependencies used in a project; creating PRs to fix those vulnerabilities using the minimum required versions; and creating PRs to keep all dependencies on their latest versions. This TeraTip shows how to implement Slack notifications about detected vulnerabilities and automated PRs using GitHub Actions.

Dependabot configuration Requirements: admin permissions on the repository. The first step is to configure Dependabot in our repository by following these steps: Go to the Security tab in the repository. Go to Dependabot in the 'Vulnerability alerts' section. Click on Configure and Manage repository vulnerabilities settings. Then, in the Dependabot section under "Code security and analysis", enable Dependabot alerts and Dependabot security updates. Note that the Dependency graph should be enabled automatically after enabling the Dependabot alerts option. At this point, Dependabot is enabled and will start looking for vulnerabilities and creating automated PRs.

Slack configuration Requirements: be logged in to your Slack workspace. In Slack, we need a channel to receive notifications and a Slack app with an incoming webhook URL to be used by our GitHub Actions. We assume the Slack channel already exists (it does not matter whether it is public or private), so let's create the app: Go to the Slack API site and click on the Create Your Slack app button. Click the Create New App button and select the "From scratch" option. Choose a name for the app and select the workspace where the channel is.
Then go to Incoming Webhooks and enable that option. Once Incoming Webhooks are enabled, you can Add New Webhook to Workspace. Select your channel from the list and click on Allow. You should see something like this: GitHub Actions will use this webhook URL.

GitHub Actions configuration In this last step we will use three actions already created: To get notifications about PRs created by Dependabot: To get notifications about vulnerabilities detected by Dependabot: Since not all vulnerabilities can be resolved with automatic PRs, it is good to get notifications for all detected vulnerabilities. Now we need to create two workflows by adding the following YAML files under .github/workflows in the repository.

dependabot-pr-to-slack.yaml

name: Notify about PR ready for review
on:
  pull_request:
    branches: ["main"]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  slackNotification:
    name: Slack Notification
    # This job only runs when the PR branch starts with dependabot/
    if: startsWith(github.head_ref, 'dependabot/')
    runs-on: ubuntu-latest
    steps:
      # Latest version available at:
      - uses: actions/checkout@v2.5.0
      - name: Slack Notification
        # Latest version available at:
        uses: kv109/action-ready-for-review@0.2
        env:
          SLACK_CHANNEL: dependabot-notifications
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

This workflow runs every time Dependabot creates a new PR.

dependabot-vulns-to-slack.yaml

name: 'Dependabot vulnerabilities notification to Slack'
on:
  schedule:
    - cron: '0 10 * * 1'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  Notify-Vulnerabilities:
    runs-on: ubuntu-latest
    steps:
      # Latest version available at:
      - name: Notify Vulnerabilities
        uses: kunalnagarco/action-cve@v1.7.15
        with:
          token: ${{ secrets.PERSONAL_ACCESS_TOKEN }} # This secret needs to be created
          slack_webhook: ${{ secrets.SLACK_WEBHOOK }} # This secret needs to be created

This workflow runs periodically, based on the cron expression.
As commented in the code, we need to add two secrets to our repository to be used by these workflows: PERSONAL_ACCESS_TOKEN and SLACK_WEBHOOK. To add both secrets, follow these steps: Go to the Settings tab in the repository. Go to Secrets → Actions in the 'Security' section. Click on New repository secret and add the following: The chosen names are used in the workflows, so if you modify them, change them in the YAML files as well. We also need to add the SLACK_WEBHOOK secret under Secrets → Dependabot, in the same way as before. The SLACK_WEBHOOK value is the URL created previously. PERSONAL_ACCESS_TOKEN can be created by following these steps: Click on your profile and select Settings. Go to Developer settings. Click on Personal access tokens and choose Tokens (classic). Click on Generate new token (classic). Select the following permissions: Click on Generate token and copy the generated token. The token won't be visible later, so be sure to copy it now. For this workflow, PERSONAL_ACCESS_TOKEN must belong to an admin collaborator of the repository.

Checking notifications in Slack Dependabot vulnerability notification example: Dependabot PR notification example: In the Security tab → "Vulnerability alerts" section, under Dependabot, we can confirm that the alerts correspond to the detected vulnerabilities and the automated PRs created.

Final Thoughts Leveraging Dependabot alongside GitHub Actions for Slack notifications offers a streamlined approach to staying informed about version updates within your project's package ecosystem. By setting a daily update interval in Dependabot's configuration file, you ensure timely awareness of any updates. This integration not only simplifies the tracking of changes but also enhances collaboration and communication among team members. For a detailed guide on setting up Dependabot with GitHub Actions and enabling Slack notifications, refer to the comprehensive documentation available on GitHub Docs.
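The daily update interval mentioned above lives in Dependabot's own configuration file, .github/dependabot.yml. A minimal sketch is shown below; the npm ecosystem and root directory are illustrative assumptions, so adjust them to your project's package manager and manifest location:

```yaml
# .github/dependabot.yml - illustrative example
version: 2
updates:
  - package-ecosystem: "npm"   # ecosystem of your project's dependency manifest
    directory: "/"             # location of the manifest within the repository
    schedule:
      interval: "daily"        # check for new versions every day
```

With this file committed, Dependabot opens version-update PRs on the chosen schedule, and the workflows above pick them up and notify Slack.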
Just a click to enable, and you'll be effortlessly keeping pace with the latest version updates, promoting a more secure and efficient development environment. Ignacio Rubio DevOps Engineer Teracloud If you want to know more about GitHub, we suggest checking GitHub Actions without AWS credentials

  • Terraform Workspaces

    As we know, when we deploy infrastructure, we commonly need several different environments, such as development, testing, staging, or production. Terraform allows us to write scalable and sustainable infrastructure for those different environments. The question that arises is: how can I reuse my code efficiently? There are many ways to accomplish this, but today we are going to focus on Terraform workspaces. One of the best advantages of using workspaces is handling the state files independently. This file represents a "photo" of our infrastructure, and Terraform uses it to detect which resources will change or be deleted in an execution. By default, Terraform has one workspace called default. If you have never heard of workspaces, you have probably been working in it all along. When we explore this command and its sub-commands, we find:

• terraform workspace new: create a new workspace
• terraform workspace select: select a workspace
• terraform workspace list: list available workspaces
• terraform workspace show: show the current workspace
• terraform workspace delete: delete a workspace

Separate state files When a workspace is created and a configuration is applied, Terraform creates a directory called terraform.tfstate.d with a subdirectory for each environment, containing the respective tfstate file. *As a note, remember to always save the tfstate file in a remote S3 backend and lock it using a DynamoDB table.

What about code reuse? For this, we can use a conditional expression to create different types and amounts of infrastructure using the same code: condition ? true_val : false_val This can be applied as follows, using the count attribute:

resource "aws_eip" "example" {
  count    = var.create_eip ? 1 : 0
  instance = ""
}

As you can see, the "aws_eip" resource will only be created if the boolean value assigned to var.create_eip is set to true (1).
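The same conditional pattern combines naturally with workspaces: the current workspace name can drive per-environment values. A minimal sketch, assuming illustrative instance types and resource names that are not part of any module above:

```hcl
locals {
  # terraform.workspace resolves to the currently selected workspace
  environment = terraform.workspace

  # Illustrative per-environment sizing
  instance_types = {
    production = "t3.large"
    staging    = "t3.small"
    default    = "t3.micro"
  }
}

resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = lookup(local.instance_types, local.environment, "t3.micro")

  tags = {
    Environment = local.environment
  }
}
```

Selecting a different workspace (terraform workspace select staging) then changes both the state file in use and the values the configuration resolves to, with no code duplication.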
Although Terraform doesn't support IF statements, this is a very simple way to handle it, and it shows how using the count attribute allows us to create a dynamic configuration. We can also define different "variable files" as input for the different environments:

terraform plan -var-file="prod.tfvars"
terraform apply -var-file="prod.tfvars"

An example .tfvars file defines variables used only in one of those environments, like the following:

instance_type = "t3.large"
ami = "ami-09d3b3274b6c5d4aa"
cidr_vpc = ""

We can conclude that this is a great way to reuse infrastructure code, separating the respective state files and fulfilling one of the AWS Well-Architected Framework pillars, Operational Excellence 🙌.

Bonus track In some cases, you may not want to work with the default workspace. To achieve this, we can trick Terraform by defining an environment variable like so:

environment = terraform.workspace == "default" ? "name_main_environment" : terraform.workspace

As you can see, the environment variable takes the value of your main environment (e.g. production) if terraform.workspace equals default. Otherwise, the value of environment will be the name of the workspace you have selected. Martín Carletti DevOps Teracloud If you want to know more about Terraform, we suggest checking Importing resources from the Gcloud to IaaC in Terraform

  • Best Security Practices, Well-Architected Framework

    Understanding good practices is essential to doing a job well, and it matters even more when we talk about security: protecting our data and creating architectures that perform well is an essential task. Today we are going to look at the second pillar of the Well-Architected Framework: the security pillar. Basically, the security pillar tells us how to take advantage of cloud computing technologies to protect our data, our systems, and all the assets we will have on our platform, using the different tools the cloud puts at our disposal and following good practices.

Why is security important in our architecture? Above all else, in the security world it is priority number one, two, and three. It is important because we want to earn the trust of our clients, who can be internal or external. An internal client can be part of the organization, supported by the services the company manages, while external clients consume the services the organization provides. The organization doesn't necessarily provide services through a page or an application; it can also provide other types of services that are nonetheless supported by systems the organization deployed in the cloud. On the other hand, there is another very important point: the legal or regulatory requirements that we as an organization must comply with. We must have the appropriate controls and an architecture designed to meet those regulations.

Design principles on which the security pillar is based Implement a strong identity foundation: apply segregation of duties with automation, and centralize the administration of our users' identities.
Enable traceability: we must monitor and audit actions and changes in our environments in real time, collecting records of the changes that have been made through different services. Apply security at all levels: we have to defend everywhere. Don't think of security as something that sits only at the final barrier; it must be applied throughout and thought of in depth. Automate recommended practices: the idea is to create architectures that are scalable, secure, and traceable as far as possible. Protect our data in transit and at rest: this basically means classifying information into different levels of sensitivity, using mechanisms to protect and encrypt it, and having good access control. Keep people away from data: normally we are used to creating our infrastructure and making information available to everyone, but when you have sensitive data it is not good to be so flexible with access. Mechanisms and tools need to be implemented that keep people from having to access data directly, to eliminate any risk of data leakage. Prepare for security events: basically, prepare for the worst, for anything that might happen. When you create an application or a platform, you think about the value it will give the client, but you rarely think about what happens if someone hacks it, if the application breaks, or if it has a security problem. What you can do is run simulations of how you respond to incidents and of how to prepare the infrastructure to protect itself, so you can detect the problem, investigate its cause, and recover. This is why security is one of the main concerns of a company when deciding to move part or all of its computing and data-management resources to cloud computing services.
With cloud security you can protect the integrity of cloud-based applications, data, and virtual infrastructure, because cybersecurity attackers can exploit security vulnerabilities, using stolen credentials or compromised applications to carry out attacks, interrupt services, or steal confidential data. One of the best solutions is to automate the security of your operation, one of Teracloud's specialties. At Teracloud we take care to understand your business, risks, and processes to help you transform, keep your data free of risk, and keep your company safe and running. Does your company have any security vulnerabilities? Contact us! Damian Olguin Founder and CTO Teracloud

  • S3 Website + CloudFront CDN with Authentication via AWS Cognito

    Sometimes we need to protect our website (or part of it) from unauthorized access. This can be tricky, because we need to think of a custom authentication module, or a third-party platform to integrate with our system, and we also need to be concerned about the availability and performance of that service as it grows. In this TeraTip we will discover a new way of deploying our static web content to a highly available service such as AWS S3, using CloudFront as a CDN that helps you distribute your content quickly and reliably at high speed. As mentioned before, we need protection from unauthorized access, so we will implement AWS Cognito as an authentication service, using JWTs for session management via AWS Lambda.

High-Availability Website To begin, we will decide whether to host a new S3 website or use an existing one. We can deploy our static web content to a private S3 bucket and access it via CloudFront using an Origin Access Identity (OAI). Our Terraform module lets you set your domain and aliases, and then creates the CloudFront distribution, the S3 bucket, and even the SSL certificates (using AWS Certificate Manager).

Authentication Process Good, we have our highly available website already running, but we can notice that anyone can access it. Don't worry: our module will create an authentication process that CloudFront triggers whenever a user requests access to our website.

Lambda@Edge Lambda@Edge lets you run AWS Lambda functions in an AWS location close to your customer, in response to CloudFront events. Your code can be triggered by Amazon CloudFront events such as requests for content by viewers or requests from CloudFront to origin servers. You can use Lambda functions to change CloudFront requests and responses at several points in the request lifecycle.

JWT Authentication JSON Web Token (JWT) is a JSON-based open standard for creating access tokens that assert a series of claims as a JSON object.
There are several benefits to using Lambda@Edge for authorization operations. First, performance improves because the authorization function runs at the Lambda@Edge location closest to the viewer, reducing latency and response time. The load on your origin servers is also reduced by offloading CPU-intensive operations such as verification of JSON Web Token (JWT) signatures. We will implement the JWT authentication process via Lambda functions, using Node.js 14.x in this case.

Amazon Cognito Now let's talk about AWS Cognito. Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It also scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and with enterprise identity providers via SAML 2.0 and OpenID Connect. Using this service, our authentication process works as follows: our Terraform module creates the Cognito user pool for us and adds the Cognito login URL to the CloudFront response when we aren't authenticated. Then we can sign up new users via the Cognito login URL, or better, we can access the Cognito service in our AWS account and manage users, add new authentication providers, change password policies, and a lot of other things. Finally, our architecture works as follows: access for the first time without an authenticated user, then once we authenticate with a Cognito user.

PoC Terraform module This solution is easy to deploy, because we built a Terraform module that, with a few variables, can deploy the entire infrastructure for us. Below is an example that creates the website and a Cognito user pool, and finally uploads an index.html to the S3 bucket to check that, after authentication, we can access the website.
module "cloudfront-s3-cognito" {
  source = ""

  # Region where the S3 website and Cognito services will be stored - default is us-east-1
  region = "us-west-2"

  # Name of the service, used as a subdomain for your website
  service = "cloudfront-s3-cognauth"

  # Name of the domain used in the web URL (this domain must exist in Route53)
  domain = ""

  # Name of the Lambda function that will handle the JWT authentication process
  lambda_name = "cognito-jwt-auth"

  # Set to true to create an example index.html for the S3 website
  index_example = true

  # To add a logo to the Cognito login page, set the path/filename
  logo = "logo.png"
}

Conclusion That's all you need! With a few lines of Terraform, we've created a frontend application in S3 with CloudFront as a CDN, SSL certificates, and authentication mechanisms that protect it. Protecting frontend and even backend code has never been easier, and doing so at the infrastructure level lets your apps focus on just what they ought to. Nicolas Balmaceda DevOps Teracloud
