

  • Terraform Conditionals and Loops: Some Terraform hacks that you should know

    LEVEL: BASIC

    Did you ever find yourself writing Terraform and suddenly need to create multiple resources dynamically, or based on a condition, without knowing how? This TeraTip brings you the answers to some of those questions. Let's get started!

    Conditionals

    Our first mission is to learn how to create a Terraform resource conditionally. Let's say you want to create an Auto Scaling Group based on a variable (for example, one indicating whether the ASG is enabled, or whether the environment is production). For this we use the ternary operator available in Terraform, which looks like this:

    ```hcl
    condition ? value_if_true : value_if_false
    ```

    With this expression we can set attributes conditionally, or even create whole resources conditionally. How would you use this to create a resource conditionally? We put the special count meta-argument at the top of the resource and derive its value from the var.asg_is_enabled variable (a bool, by the way). If asg_is_enabled is true, we create the Auto Scaling Group; if it's false, the ASG is not created. The same ternary expression can also be used to set an object's attributes dynamically.

    Terraform's for_each meta-argument

    When working on big projects we have to create a lot of resources with Terraform, and many of them are created multiple times with a different configuration each time. One way of doing this is to repeat the same resource block under different names. However, this reduces the maintainability of the code, making it more difficult to read and easier to get wrong (for example, you have to change all the EC2 instance resources but you miss one). For this, Terraform gives us the for_each meta-argument.
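    The conditional-count example above was originally shown as a screenshot; here is a minimal sketch of the pattern, with illustrative variable and resource names (everything below is an assumption, not the original code):

    ```hcl
    # Sketch only: names and values are illustrative.
    variable "asg_is_enabled" {
      type    = bool
      default = true
    }

    variable "environment" {
      type    = string
      default = "production"
    }

    resource "aws_autoscaling_group" "this" {
      # count = 1 creates the resource, count = 0 skips it entirely.
      count = var.asg_is_enabled ? 1 : 0

      name     = "my-asg"
      min_size = 1
      max_size = 5

      # The same ternary works for setting an attribute conditionally:
      desired_capacity = var.environment == "production" ? 3 : 1

      # launch template, subnets, etc. omitted for brevity
    }
    ```

    Note that count turns the resource into a list, so other resources would reference it as aws_autoscaling_group.this[0].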
    The for_each meta-argument creates one resource per element of a collection, and it's commonly used for creating multiple resources with different configurations. In the original example, three instances are created, each with its own configuration. Right now that looks small, but imagine having to configure a lot more parameters for each instance, or having to create 10 instances instead of 3. The code gets big very quickly, which hurts both the readability and the maintainability of the code. With for_each, on the other hand, the resource block always stays the same number of lines; instead, we add more instances just by adding more objects to our var.instances variable. What we're doing is defining var.instances as a collection of objects, one object per instance that we want to create, each holding that instance's particular configuration. In the future, if you want to add a new instance, just add a new object to the collection along with its attributes, and Terraform will handle the creation of the new instance ;)

    Hope you liked this TeraTip! Stay tuned for more Terraform tricks that will make your life easier.

    Juan Wiggenhauser
    Cloud Engineer
    Teracloud

    If you want to know more about Terraform, we suggest checking How to Restore a Previous Version of Terraform State

    If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
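    The for_each example in the tip above was also shown as screenshots; a sketch of the idea might look like the following. Names and AMI IDs are illustrative, and note that for_each needs a map or a set of strings, so the sketch keys each instance object by a name:

    ```hcl
    # Sketch only: one aws_instance per entry in var.instances.
    variable "instances" {
      type = map(object({
        instance_type = string
        ami           = string
      }))
      default = {
        "web" = { instance_type = "t3.micro", ami = "ami-12345678" }
        "api" = { instance_type = "t3.small", ami = "ami-12345678" }
        "db"  = { instance_type = "t3.medium", ami = "ami-12345678" }
      }
    }

    resource "aws_instance" "this" {
      for_each = var.instances

      ami           = each.value.ami
      instance_type = each.value.instance_type

      tags = {
        Name = each.key
      }
    }
    ```

    Adding a fourth instance is then just a fourth entry in the map; each.key and each.value give every copy of the resource its own configuration.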

  • What is customer engagement?

    Customer engagement is the ongoing cultivation of a relationship between a company and a consumer which goes far beyond the transaction. It's an intentional and consistent approach by a company that provides value at every customer interaction, thus increasing loyalty. There are many ways to engage customers:

    - Providing excellent customer service
    - Creating valuable content
    - Offering loyalty programs
    - Hosting events
    - Using social media
    - Personalizing the customer experience

    The key to customer engagement is to understand what your customers want and need. Once you know what they are looking for, you can create experiences that will meet their needs and exceed their expectations.

    What is the goal of customer engagement?

    The goal of customer engagement is to offer customers something of value beyond your products and services. High-quality products and services initially attract customers; relevant content is what keeps them around. Marketers do this through a strategy known as customer engagement marketing.

    What are customer engagement skills?

    Customer engagement skills are the abilities to connect with customers and build relationships with them. They are essential for any business that wants to succeed. There are many different customer engagement skills, but some of the most important include:

    - Active listening: paying attention to what the customer is saying, both verbally and nonverbally. It also means asking questions to clarify what you are hearing and to show that you are interested.
    - Empathy: understanding and sharing the feelings of another person. It is important to be able to put yourself in someone else's shoes and see things from their perspective.
    - Communication: sharing your thoughts and ideas in a clear and concise way. It is also important to be able to listen to and understand the thoughts and ideas of others.
    - Problem-solving: identifying and solving problems. It is important to be able to think critically and creatively to come up with solutions that work for everyone involved.
    - Adaptability: changing and adapting to new situations. It is important to be able to roll with the punches and to be flexible in your thinking.
    - Patience: waiting calmly and without complaining. It is important to be able to deal with difficult customers and to stay calm under pressure.
    - Professionalism: acting in a way that is appropriate and respectful. It is important to be able to dress and act in a way that makes a good impression on customers.
    - Positive attitude: staying optimistic and upbeat. It is important to be able to project a positive image and to make customers feel good about doing business with you.

    Customer engagement skills can be learned and improved with practice. There are many resources available to help you develop them, such as books, articles, and online courses. Here are some extra tips to improve your customer engagement skills:

    - Be present: when you are interacting with a customer, make eye contact, listen attentively, and avoid distractions.
    - Be respectful: show that you value the customer's time and that you are interested in helping them.
    - Be open-minded: be willing to listen to the customer's concerns and to try to find solutions that work for them.
    - Be positive: be enthusiastic and encouraging.
    - Be yourself: don't try to be someone you're not. Customers can tell when you're being fake, and it will make it harder to connect with them.

    By developing your customer engagement skills, you can improve your customer relationships and build a more successful business.

    How do we deliver customer engagement from Teracloud's Service Delivery Team?

    Teracloud's Service Delivery Team is available 24/7 to answer customer questions and resolve issues. We stay proactive by reaching out to customers to ensure they're satisfied with our services. We understand that by engaging customers in ways that encourage them to help themselves and one another, by being clear about what they can expect from the company's services, and by offering them options that will enable them to achieve their desired outcomes, leaders can respond to today's challenges and position themselves for future success.

    Customer engagement may look simple at first sight, but it's a complex process that is essential for businesses to succeed. By understanding the importance of customer engagement and implementing strategies to continuously improve it, businesses can reap the many benefits sooner rather than later.

    Elisa Canale
    Service Delivery Analyst
    Teracloud

    If you want to know more about our Business Commitment, we suggest checking What are we talking about when we talk about service delivery?

  • Restore or Download a Previous Version of Terraform State

    LEVEL: INTERMEDIATE

    Introduction

    In the event of accidental destruction of resources, it may be necessary to restore a previous version of the Terraform state. If the backend bucket that stores the Terraform state has versioning enabled, you can easily download and restore an earlier version of the state file. This document assumes that you are storing your Terraform state in an S3 bucket with versioning enabled.

    Steps to Download a Previous Version of Terraform State

    Follow the steps below to restore a previous version of Terraform state from a backend bucket with versioning enabled:

    1) Identify the version of the state file that you want to restore by listing the version history of your backend bucket. The version history shows every previous version of the state file that has been uploaded to the bucket.

    ```shell
    aws s3api list-object-versions --bucket my-tf-state-bucket --prefix my-state-file.tfstate
    ```

    2) Download the desired version of the state file from the bucket to your local machine. In the console, click the version number in the version history and select "Download"; from the CLI, pass the version ID and an output file name:

    ```shell
    aws s3api get-object --bucket my-tf-state-bucket --key my-state-file.tfstate --version-id ABC1234 my-state-file.tfstate
    ```

    3) Point Terraform at the downloaded version. Open the backend configuration file, usually named backend.tf, and find the key attribute that specifies the path to the state file. Replace the file name in that path with the name of the downloaded file:

    ```hcl
    terraform {
      backend "s3" {
        bucket = "my-tf-state-bucket"
        region = "my-region"
        # replace the state file name with the name of the downloaded file
        key    = "my-downloaded-state-file.tfstate"
      }
    }
    ```

    4) Verify the restored state by running terraform plan. This command shows you the differences between the restored state and the current infrastructure.

    ```shell
    terraform plan
    ```

    5) Apply the restored state by running terraform apply. This command applies the restored state to the infrastructure and restores any previously destroyed resources.

    ```shell
    terraform apply
    ```

    By following these steps, you can easily restore a previous version of Terraform state from a backend bucket with versioning enabled.

    Conclusion

    Restoring a previous version of Terraform state can save you from the pain of rebuilding an entire infrastructure from scratch. With the versioning feature of Terraform backend buckets, the process is simple and straightforward.

    Martín Carletti
    Cloud Engineer
    Teracloud

    If you want to know more about our Tips, we suggest checking Two tools for handling obsolete APIs in k8s

  • Why you should use IMDSv2

    LEVEL: BASIC

    It's essential to understand the potential risks associated with using various services and configurations within cloud environments. One configuration that may pose a risk is the use of IMDSv1 with Amazon Web Services (AWS) Elastic Compute Cloud (EC2). In this post, we'll discuss why it's dangerous to use IMDSv1 with AWS EC2 and why we should use IMDSv2 instead.

    The Instance Metadata Service (IMDS) is a service provided by AWS that allows EC2 instances to access metadata about themselves, such as their instance ID, security groups, and IAM role. IMDSv1 is dangerous to use with AWS EC2 because it lacks any built-in security features. The metadata endpoint answers any plain HTTP GET request coming from the instance, which means that anyone who can make the instance send a request on their behalf (through an SSRF vulnerability or an open reverse proxy, for example) can potentially read the metadata. This can be a significant risk if an attacker gains access to the EC2 instance.

    Suppose an attacker gains access to an EC2 instance whose IAM role credentials are exposed through the instance metadata. They can then use those credentials to access other resources in the account, potentially leading to a full compromise of the environment. This is why it's essential to take measures to protect the metadata endpoint. Example:

    ```shell
    # the role name at the end of the path is illustrative
    [@ip-xx-xx-xx-xx ~]$ curl -XGET http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role
    {
      "Code" : "Success",
      "LastUpdated" : "2023-02-27T16:46:31Z",
      "Type" : "AWS-HMAC",
      "AccessKeyId" : "super_secret_access_key_id",
      "SecretAccessKey" : "super_secret_access_key",
      "Expiration" : "2023-02-27T22:48:51Z"
    }
    ```

    IMDSv2 is the second version of the IMDS, introduced in 2019. It provides several security features that make it safer to use with AWS EC2 instances, because they make it much more difficult for attackers to reach sensitive data and resources.
    - Session authentication: IMDSv2 requires every request to carry a session token, which can only be obtained with an HTTP PUT request. This ensures that requests to the metadata endpoint come from a valid session and defeats most SSRF-style attacks, which typically can only issue GET requests.
    - Time-bound tokens: session tokens expire after a set period (up to six hours), reducing the risk of unauthorized access.
    - Hop-limit protection: by default, the response carrying the session token has an IP hop limit of 1, so the token cannot travel beyond the instance itself, for example through a container network or a misconfigured layered service.

    We can make use of the metadata endpoint with IMDSv2 like this:

    ```shell
    TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
      && curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/
    ```

    Adding another layer of defense, IMDSv2 will also not issue session tokens to any caller with an X-Forwarded-For header, which is effective at blocking unauthorized access due to misconfigurations like an open reverse proxy.

    Using IMDSv1 with AWS EC2 can be risky since it can hand attackers access to sensitive data and resources. By using IMDSv2, you can significantly reduce the risk of a compromise. Happy coding and see you in the Cloud!
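    If you manage your instances with Terraform, as other tips on this blog do, IMDSv2 can be enforced from the instance definition. A sketch, with only the metadata-related attributes meaningful and illustrative values elsewhere:

    ```hcl
    resource "aws_instance" "example" {
      ami           = "ami-12345678" # illustrative
      instance_type = "t3.micro"

      metadata_options {
        http_endpoint               = "enabled"
        http_tokens                 = "required" # reject token-less (IMDSv1) requests
        http_put_response_hop_limit = 1          # keep session tokens from leaving the instance
      }
    }
    ```

    For an instance that is already running, the same can be done from the CLI with aws ec2 modify-instance-metadata-options --instance-id <id> --http-tokens required.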
    References:

    - https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/
    - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html

    Juan Bermudez
    Cloud Engineer
    Teracloud

    If you want to know more about Cloud Security, we suggest checking Best Security Practices, Well-Architected Framework

  • Conftest: The path to more efficient and effective Kubernetes automated testing

    This TeraTip is going to take our DevSecOps pipelines to the next level! We are going to make use of Conftest.

    What is Conftest?

    Conftest is a utility to help you write tests against structured configuration data. For instance, you could write tests for your Kubernetes configurations, Tekton pipeline definitions, Terraform code, Serverless configs, or any other structured data. Conftest relies on the Rego language from Open Policy Agent for writing policies. If you're unsure what exactly a policy is, or unfamiliar with the Rego policy language, the Policy Language documentation on the Open Policy Agent documentation site is a great resource to read. We are going to run a brief demo configuring some rules for a Dockerfile.

    It's demo time!

    1) Get familiar with the Rego language.

    2) Let's begin writing some rules for our Dockerfile. Execute the following:

    ```shell
    touch opa-docker-security.rego
    ```

    Remember, don't forget the file extension; it must be .rego.

    3) With your IDE of choice, open the file and add the following rule:

    ```rego
    package main

    # Do not store secrets in ENV variables
    secrets_env = [
      "passed",
      "password",
      "pass",
      "secret",
      "key",
      "access",
      "api_key",
      "apikey",
      "token",
      "tkn"
    ]

    deny[msg] {
      input[i].Cmd == "env"
      val := input[i].Value
      contains(lower(val[_]), secrets_env[_])
      msg = sprintf("Line %d: Potential secret in ENV key found: %s", [i, val])
    }
    ```

    With this rule, we are checking for potential keys and sensitive data within the Dockerfile; the watch list is secrets_env.

    4) If you don't have a Dockerfile ready, it's time to create one:

    ```shell
    touch Dockerfile
    ```

    And write some content into it (deliberately non-compliant, so the scan has something to catch):

    ```dockerfile
    FROM adoptopenjdk/openjdk8:alpine-slim
    WORKDIR /app
    EXPOSE 8080
    ARG JAR_FILE=/app.jar
    ENV tera-secret="secret"
    RUN sudo apt-get upgrade
    RUN curl https://www.teracloud.io/
    COPY ${JAR_FILE} /app.jar
    ENTRYPOINT ["java","-jar","/app.jar"]
    ```

    5) With our Dockerfile and our OPA rules defined, we can proceed to the scan. Execute the following:
    ```shell
    docker run --rm -v $(pwd):/project openpolicyagent/conftest test --policy opa-docker-security.rego Dockerfile
    ```

    If you don't have the image locally, it will be pulled automatically. Make sure you run this command in the same directory where your Dockerfile lives. The output shows that our Dockerfile is not following best practices, so let's add some more rules in the next step.

    6) Add more OPA rules! Copy and paste the following into our .rego file:

    ```rego
    # Do not use 'latest' tag for base images
    deny[msg] {
      input[i].Cmd == "from"
      val := split(input[i].Value[0], ":")
      contains(lower(val[1]), "latest")
      msg = sprintf("Line %d: do not use 'latest' tag for base images", [i])
    }

    # Avoid curl bashing
    deny[msg] {
      input[i].Cmd == "run"
      val := concat(" ", input[i].Value)
      matches := regex.find_n("(curl|wget)[^|^>]*[|>]", lower(val), -1)
      count(matches) > 0
      msg = sprintf("Line %d: Avoid curl bashing", [i])
    }

    # Do not upgrade your system packages
    upgrade_commands = [
      "apk upgrade",
      "apt-get upgrade",
      "dist-upgrade",
    ]

    deny[msg] {
      input[i].Cmd == "run"
      val := concat(" ", input[i].Value)
      contains(val, upgrade_commands[_])
      msg = sprintf("Line %d: Do not upgrade your system packages", [i])
    }

    # Do not use ADD if possible
    deny[msg] {
      input[i].Cmd == "add"
      msg = sprintf("Line %d: Use COPY instead of ADD", [i])
    }

    # Any user...
    any_user {
      input[i].Cmd == "user"
    }

    deny[msg] {
      not any_user
      msg = "Do not run as root, use USER instead"
    }

    # ... but not root
    forbidden_users = [
      "root",
      "toor",
      "0"
    ]

    deny[msg] {
      input[i].Cmd == "user"
      val := input[i].Value
      contains(lower(val[_]), forbidden_users[_])
      msg = sprintf("Line %d: Do not run as root: %s", [i, val])
    }

    # Do not sudo
    deny[msg] {
      input[i].Cmd == "run"
      val := concat(" ", input[i].Value)
      contains(lower(val), "sudo")
      msg = sprintf("Line %d: Do not use 'sudo' command", [i])
    }
    ```

    7) Alright! Now run Conftest again. The output shows we got some failures; we'll remediate them in the next step.
    8) Make your Dockerfile compliant with your established rules! (Try it out without looking at the solution.) The new Dockerfile:

    ```dockerfile
    FROM adoptopenjdk/openjdk8:alpine-slim
    WORKDIR /app
    EXPOSE 8080
    ARG JAR_FILE=target/*.jar
    RUN addgroup -S pipeline && adduser -S sec-pipeline -G pipeline
    COPY ${JAR_FILE} /home/sec-pipeline/app.jar
    USER sec-pipeline
    ENTRYPOINT ["java","-jar","/home/sec-pipeline/app.jar"]
    ```

    This time the scan passes. Awesome! We successfully added OPA rules for our Dockerfile! Now our Dockerfiles are going to be more secure and will follow the standards we established.

    References

    - https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
    - https://www.openpolicyagent.org/docs/latest/policy-language/
    - https://www.conftest.dev/

    Tomás Torales
    Cloud Engineer
    Teracloud

    If you want to know more about Kubernetes, we suggest checking Enhance your Kubernetes security by leveraging KubeSec

  • Modernize your business—before your competitors take the lead. Migrate to the Cloud with Teracloud

    Managing Cloud adoption has become companies' top priority over the last few years, and the tendency keeps growing: it's estimated that by 2025 over 95% of new digital workloads will be deployed on cloud-native platforms. Organizations acknowledge that moving to the Cloud mainly allows them to achieve greater flexibility, increase scalability, get significant cost savings, respond quickly to business opportunities, innovate faster, pursue potential gains in operational efficiency, and be more competitive.

    Companies across all industries want to migrate to the cloud for all sorts of reasons, but if they're clear about what their actual requirements are, it'll be easier to elaborate a migration strategy and prepare a migration plan. They need to be clear about what cloud services adoption is all about. It involves moving data, applications, and other IT resources from on-premise data centers and servers to the cloud. Furthermore, they could either move data to public cloud service providers (like Amazon Web Services - AWS -), set up their own private cloud computing environment, or create a hybrid environment. Organizations have the flexibility to choose which migration approach is best for their specific needs. The most common migration needs are:

    - Draining data centers and moving into the cloud completely
    - Migrating to the cloud for a certain period of time in order to update data center infrastructure
    - Lifting a specific application from on-premise to a cloud environment
    - Saving costs by deploying a hybrid environment

    At Teracloud, the first step we take in a migration plan is to identify the applications and technologies your organization has. Once this process is done, we evaluate the pros and cons of migration and define a set of priorities according to the conclusions of our analysis. Then we suggest the best strategy for migrating your workloads and technologies to the cloud.

    What migration strategies does Teracloud offer?
    As AWS Advanced Partners, we like to follow the migration strategies defined by AWS, known as the 7R framework; these help us categorize the workloads and technologies that need to be migrated:

    - Rehost
    - Replatform
    - Repurchase
    - Refactor
    - Retire
    - Retain
    - Relocate

    Let's have a brief look at what each of these implies.

    Rehost: companies can lift-and-shift, choosing to move an entire application with only minor changes - a full migration - when the need focuses on highly scalable solutions for business reasons.

    Replatform: with a replatform, or lift-tinker-and-shift, strategy, cloud (or other) optimizations can be made to achieve tangible benefits without changing the core architecture of the application.

    Repurchase: if a company just wants to switch from one product to another (for example, choosing a SaaS platform), the proper strategy is repurchase.

    Refactor: refactoring involves re-imagining and renewing the architecture and development of an application using cloud-native features.

    Retire: this strategy is used for decommissioning and disabling an application.

    Retain: for the few applications an organization would rather continue running on-premise, leaving things as they are, AWS calls this the retain strategy.

    Relocate: last but not least, the relocate, or hypervisor-level lift-and-shift, strategy is based on moving the infrastructure to the cloud without purchasing new hardware. This migration scenario is specific to VMware Cloud on AWS.

    If you're currently facing the idea of migrating to the cloud, know that multiple variables impact your migration strategy. Contact us to help you decide which strategy fits your business objectives.
    Build a modern business on AWS with Teracloud

    Victoria Vélez Funes
    SEM and SEO Specialist
    Teracloud

    If you want to know more about how we can help you, we suggest checking how we helped Axiom Cloud with their Cloud Strategy: A Refrigeration Management Software that grows in the cloud

  • Self Managed ArgoCD: Wait, ArgoCD can manage itself?

    LEVEL: INTERMEDIATE

    The answer is yes, ArgoCD can manage itself. But how, you may ask? Read this TeraTip to learn how you can set up your ArgoCD to manage itself.

    First of all, what is ArgoCD? ArgoCD is a GitOps Continuous Delivery tool for Kubernetes. It can manage all your cluster resources by constantly comparing their state in the cluster against the repositories that define those resources.

    We will start by creating a Minikube cluster for this PoC:

    ```shell
    minikube start
    ```

    Once we have our cluster running, let's install ArgoCD in the argocd namespace using Helm and the official chart from the Argo project:

    ```shell
    helm repo add argo https://argoproj.github.io/argo-helm
    helm install argo argo/argo-cd -n argocd
    ```

    Now it's time to implement something known as the App of Apps pattern. The App of Apps pattern consists of having one ArgoCD Application that is made up of other ArgoCD Applications. You can take this repository as an example: https://github.com/JuanWigg/self-managed-argo

    Basically, here we have a main application, called applications. This main application synchronizes with our self-managed-argo repo, and in this repo we have all of our other ArgoCD Applications, for example a kube-prometheus stack, core applications, Elasticsearch, and so on. Most importantly, we have an Application for ArgoCD itself. The main application looks something like this:

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: applications
      namespace: argocd
    spec:
      project: default
      destination:
        namespace: default
        server: https://kubernetes.default.svc
      source:
        repoURL: https://github.com/JuanWigg/self-managed-argo
        targetRevision: HEAD
        path: applications
      syncPolicy:
        automated:
          prune: false      # Specifies if resources should be pruned during auto-syncing (false by default).
          selfHeal: true    # Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected (false by default).
          allowEmpty: false # Allows deleting all application resources during automatic syncing (false by default).
        syncOptions:
          - CreateNamespace=true
    ```

    As you can see, the path for the application is applications. We have that same folder in our repo, holding all the Applications that ArgoCD is going to manage (including itself). Just as an example, I will leave the ArgoCD Application code here:

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: argocd
      namespace: argocd
    spec:
      project: default
      destination:
        namespace: argocd
        server: https://kubernetes.default.svc
      source:
        chart: argo-cd
        repoURL: https://argoproj.github.io/argo-helm
        targetRevision: 5.27.1
        helm:
          releaseName: argo
      syncPolicy:
        automated:
          prune: false      # Specifies if resources should be pruned during auto-syncing (false by default).
          selfHeal: true    # Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected (false by default).
          allowEmpty: false # Allows deleting all application resources during automatic syncing (false by default).
    ```

    Make sure the version you put in the Application matches the version you deployed earlier with Helm. Lastly, you need to apply the main application to the cluster:

    ```shell
    kubectl apply -f applications.yaml
    ```

    And there you have it! Now you have ArgoCD managing itself and all your applications in your cluster!

    Juan Wiggenhauser
    Cloud Engineer
    Teracloud

  • Prevent Security Hub findings related to old scanned ECR images (and save money in the process)

    Checking Security Hub after setting it up, I found a ton of findings related to old ECR images I had in my repo. If you never did this, the moment is now, and if you are starting to create your ECR repo, you'd better implement this!

    As we know, creating an ECR repo in Terraform is as simple as:

    ```hcl
    resource "aws_ecr_repository" "ecr" {
      name = "my-testing-repo"

      image_scanning_configuration {
        scan_on_push = true
      }
    }
    ```

    You provide a name for the repo and choose to scan your images every time you push a new one. This way you add a last security check to find vulnerabilities in the Docker image you will deploy. But if you don't provide a lifecycle policy for the images in the repo, you will be storing outdated images and increasing your bill! You can delete old images based on how long they've been in your repository, or limit the number of images to a number that works for you. In Terraform:

    ```hcl
    resource "aws_ecr_lifecycle_policy" "foopolicy" {
      repository = aws_ecr_repository.ecr.name
      policy     = file("${path.module}/ecr_lifecycle.json")
    }
    ```

    The policy has the following format:

    ```json
    {
      "rules": [
        {
          "rulePriority": integer,
          "description": "string",
          "selection": {
            "tagStatus": "tagged"|"untagged"|"any",
            "tagPrefixList": list,
            "countType": "imageCountMoreThan"|"sinceImagePushed",
            "countUnit": "string",
            "countNumber": integer
          },
          "action": {
            "type": "expire"
          }
        }
      ]
    }
    ```

    If the image is untagged, or you choose any for tagStatus, the tagPrefixList parameter is not needed. If countType is set to imageCountMoreThan, you also specify countNumber to create a rule that limits the number of images that exist in your repository:

    ```json
    {
      "rules": [
        {
          "rulePriority": 1,
          "description": "Keep last 4 images",
          "selection": {
            "tagStatus": "any",
            "countType": "imageCountMoreThan",
            "countNumber": 4
          },
          "action": {
            "type": "expire"
          }
        }
      ]
    }
    ```

    If countType is set to sinceImagePushed, you also specify countUnit and countNumber to set a time limit on the images that exist in your repository:
    ```json
    {
      "rules": [
        {
          "rulePriority": 1,
          "description": "Expire images older than 14 days",
          "selection": {
            "tagStatus": "untagged",
            "countType": "sinceImagePushed",
            "countUnit": "days",
            "countNumber": 14
          },
          "action": {
            "type": "expire"
          }
        }
      ]
    }
    ```

    Lourdes Dorado
    Cloud Engineer
    Teracloud

    If you want to know more about Cost Optimization, we suggest checking Cost Optimization on AWS: 10 Tips to Save Money
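    As a quick sanity check before terraform apply, the structural rules described in the tip above (countUnit only goes with sinceImagePushed, and a tag prefix list is only needed for tagged images) can be encoded in a few lines of Python. This is an unofficial sketch of just those constraints, not an AWS-side validator:

```python
import json

def validate_lifecycle_policy(policy_json):
    """Return a list of problems found in an ECR lifecycle policy document.

    Unofficial sketch: it checks only the constraints discussed in this tip.
    """
    problems = []
    policy = json.loads(policy_json)
    for rule in policy.get("rules", []):
        prio = rule.get("rulePriority")
        sel = rule.get("selection", {})
        if not isinstance(prio, int):
            problems.append("rulePriority must be an integer")
        if sel.get("countType") == "sinceImagePushed" and "countUnit" not in sel:
            problems.append("rule %s: sinceImagePushed requires countUnit" % prio)
        if sel.get("countType") == "imageCountMoreThan" and "countUnit" in sel:
            problems.append("rule %s: imageCountMoreThan must not set countUnit" % prio)
        # tagged images need a tag prefix list (AWS also accepts tagPatternList,
        # which this sketch does not check)
        if sel.get("tagStatus") == "tagged" and "tagPrefixList" not in sel:
            problems.append("rule %s: tagStatus 'tagged' requires tagPrefixList" % prio)
        if rule.get("action", {}).get("type") != "expire":
            problems.append("rule %s: action type must be 'expire'" % prio)
    return problems

keep_last_4 = """
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep last 4 images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 4
      },
      "action": { "type": "expire" }
    }
  ]
}
"""

print(validate_lifecycle_policy(keep_last_4))  # prints []
```

    Running it against the "Keep last 4 images" policy above returns an empty list; dropping countUnit from the sinceImagePushed example would be flagged.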

  • The Importance of Cloud Security for the Finance Sector

Financial institutions of all sizes increasingly view services provided by cloud service providers as an important component of their technology programs, and cloud adoption can represent a significant change to financial institutions' internal operations and delivery of services to clients and customers. In the finance sector, where sensitive information such as financial records, customer data, and transaction details is stored online, the risk of cyber threats is high. This blog post discusses the importance of cloud security for the finance sector and how it can benefit businesses.

The finance sector is heavily regulated, and businesses must comply with strict security standards and protocols to ensure the safety of their customers' personal and financial information. Cloud security provides a secure environment for storing sensitive data and helps ensure compliance with regulations such as GDPR, PCI DSS, and HIPAA. Failing to comply with these regulations can result in hefty fines and legal action against businesses.

Protection Against Cyber Threats

The finance sector is a prime target for cybercriminals due to the high-value assets held by financial institutions. Cyber threats such as hacking, phishing, and malware attacks can cause significant damage to businesses, including financial loss, reputational damage, and loss of customer trust. Cloud security provides an additional layer of protection against these threats by implementing security measures such as data encryption, access controls, and regular security audits.

Data protection is the first step of an adequate cloud security strategy. At Teracloud we reinforce security and protect businesses. By implementing best practices and the right security strategy you will be able to:

• Save time and reduce costs by speeding up processes with the automation of operations.
• Have control over information and account reports safely and securely.
• Enhance data privacy by optimizing and applying multiple layers of security.

Cloud security not only provides enhanced protection against cyber threats but also offers cost savings and scalability benefits. Cloud infrastructure eliminates the need for businesses to invest in expensive hardware and software, saving them money on infrastructure costs. It also provides businesses with the flexibility to scale their operations up or down depending on their needs, allowing for growth and agility in the finance sector.

Larger investment advisors, investment companies, and broker-dealers are adopting cloud computing services to scale operations, build for business continuity needs, and launch products more quickly to market. Some firms started natively in the cloud and have built their entire technology stack in the cloud. Other firms are either preparing to move to the cloud, piloting workloads in the cloud, or scaling operations in the cloud, typically in an incremental fashion. Still others have yet to transition to the cloud in a significant way and are taking a wait-and-see approach as cloud computing matures. The idea is to drive business results with fewer resources dedicated to infrastructure support. This improves the security of the infrastructure and prevents the business from losing money; in addition, it brings governance and control over the data, compliance with the regulations of the financial industry and, most importantly, trust with customers.

A success story from a client that implemented Security Solutions in the Cloud with Teracloud

Due to the nature of the wealth management industry, security is a top priority; on top of that, Pulse360 needed to know where they stood regarding industry best practices before their new platform went into public production use.

Pulse360 wanted an experienced AWS partner with a strong background in security to review their infrastructure and help build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Teracloud's knowledge of security and experience helping all kinds of startups scale their business in a healthy and efficient way was crucial in their decision. If you want to know the complete solution, we invite you to follow this link: How Teracloud is helping Pulse360 revolutionize the wealth management industry

Improving your infrastructure's security will keep your business from losing money; plus, you will gain governance and control over your data, comply with financial industry regulations, and, most importantly, build trust with your customers. Let's talk about preventing hackers from stealing your financial organization's confidential data and explore whether this would be valuable to incorporate into your workflow. We can help you!

Liliana Medina
Marketing Manager
Teracloud

If you want to know more about Security, we suggest going check What did AWS Re: Invent bring us in terms of Security?

  • Enhance your Kubernetes security by leveraging KubeSec

Kubesec is an open-source Kubernetes security scanner and analysis tool. It scans your Kubernetes resources for common exploitable risks, such as privileged capabilities, and provides a severity score for each vulnerability it finds.

Security risk analysis for Kubernetes resources:
• Takes a single YAML file as input.
• One YAML file can contain multiple Kubernetes resources.

Kubesec is available as:
• Docker container image at docker.io/kubesec/kubesec:v2
• Linux/MacOS/Win binary (get the latest release)
• Kubernetes Admission Controller
• Kubectl plugin

Keep your cluster secure and follow me through a brief demo!

First things first, we are going to define a bash script that performs the scans on our YAML file by calling the KubeSec API.

1) Execute:

touch kubesec-scan.sh

2) Create our risky deployment! Execute another touch command as follows:

touch insecure-deployment.yaml

Then paste the following content (make sure you are using your own image; it can also be a testing one, e.g. public.ecr.aws/docker/library/node:slim):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: devsecops
  name: devsecops
spec:
  replicas: 2
  selector:
    matchLabels:
      app: devsecops
  strategy: {}
  template:
    metadata:
      labels:
        app: devsecops
    spec:
      volumes:
        - name: vol
          emptyDir: {}
      containers:
        - image: replace
          name: devsecops-container
          volumeMounts:
            - mountPath: /tmp
              name: vol

3) Back to our bash script: define some variables for later use; here we make use of the KubeSec API. Open the newly created file with your preferred text editor and paste the following:

#!/bin/bash
# KubeSec v2 api
scan_result=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan)
scan_message=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].message)
scan_score=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].score)
# Kubesec scan result processing
# echo "Scan Score : $scan_result"

Alright! In the previous step we made some interesting calls to the KubeSec API. The first variable holds a big JSON object (we can see it if we uncomment the echo at the end of the script). For the next two variables, since the response is a JSON object, we use the jq CLI, a powerful and lightweight command-line JSON processor, to extract the scan message and score.

4) We continue editing the script; now it's time to log some exciting stuff! Add the following to our bash script:

if [[ "${scan_score}" -ge 5 ]]; then
  echo "Score is $scan_score"
  echo "Kubesec Scan $scan_message"
else
  echo "Score is $scan_score, which is less than 5."
  echo "Scanning Kubernetes Resource has Failed"
  exit 1
fi

This last section of the script is a basic bash conditional where we check the scan_score variable: if it is greater than or equal to 5, the scan "passes" our requirements; otherwise it fails. Note: choose score thresholds that are relevant to your application requirements. This example is just for demo purposes and is not meant to run in production environments.

The final script will look like this:

#!/bin/bash
# KubeSec v2 api
scan_result=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan)
scan_message=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].message)
scan_score=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].score)
# Kubesec scan result processing
# echo "Scan Score : $scan_result"
if [[ "${scan_score}" -ge 5 ]]; then
  echo "Score is $scan_score"
  echo "Kubesec Scan $scan_message"
else
  echo "Score is $scan_score, which is less than 5."
  echo "Scanning Kubernetes Resource has Failed"
  exit 1
fi

Alternatively, run it with Docker as follows:

#!/bin/bash
scan_result=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml)
scan_message=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml | jq .[].message)
scan_score=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml | jq .[].score)

5) Time to see the power of the KubeSec scans! Execute the script. The output shows a failing score, which means there are some security improvement opportunities. At this point we begin to see the potential integrations within our DevSecOps pipeline (see the extras section for a Jenkins example).

6) KubeSec did a good job scanning our deployment. But how do we leverage the security opportunities? If we go a few steps back and uncomment the line

# echo "Scan Score : $scan_result"

we will be able to see, under the scoring/advise section, an array of security items with their value in points and the reason, among other details. This is going to be a key component of our scans. Now we can take action.

7) Let's make some updates to our insecure deployment. Under containers, add the following:

securityContext:
  runAsNonRoot: true
  runAsUser: 100
  readOnlyRootFilesystem: true

And under the spec section:

serviceAccountName: default

Finally, run the script once again and verify the new score: it should now pass our threshold. Awesome! With just a few steps we improved our Kubernetes deployment security!

Extra: try integrating the solution into your DevSecOps pipeline!
Below is an example of a Jenkinsfile:

pipeline {
  agent any
  stages {
    stage('Vulnerability Scan - Kubernetes') {
      steps {
        sh "bash kubesec-scan.sh"
      }
    }
  }
}

To read more about security best practices for Kubernetes deployments: https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/

References:
https://kubesec.io/
https://www.jenkins.io/doc/book/pipeline/

Tomás Torales
Cloud Engineer
Teracloud

If you want to know more about Security, we suggest checking Streamlining Security with Amazon Security Hub: A where to start Step-by-Step Guide
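To see the gate logic from kubesec-scan.sh in isolation, here is a small sketch that replaces the live API call with a hard-coded score; the score and message below are made up for the demo:

```shell
# Mocked kubesec-scan gate: no API call, the score is hard-coded so the
# failing branch can be exercised locally.
scan_score=3                                  # pretend KubeSec returned 3
scan_message="run as non-root user advised"   # hypothetical advise message

if [ "$scan_score" -ge 5 ]; then
  echo "Score is $scan_score"
  echo "Kubesec Scan $scan_message"
else
  echo "Score is $scan_score, which is less than 5."
  echo "Scanning Kubernetes Resource has Failed"
fi
```

With scan_score=3 the else branch runs, mirroring the exit 1 path of the real script (the exit is omitted here so an interactive shell session survives the demo).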

  • Taking advantage of Terraform’s dynamic blocks

When using Terraform to create and maintain our infrastructure, sometimes we need to define different block properties for our environments. For example, let's say we use the same module for creating AWS CloudFront Distributions in our environments, but we only want to apply geographic restrictions to the production environment. To solve this problem we can use Terraform dynamic blocks.

In order to apply geographic restrictions to an aws_cloudfront_distribution resource we need to define a restrictions configuration block in the resource, like the following:

restrictions {
  geo_restriction {
    restriction_type = "whitelist"
    locations        = ["US", "CA", "GB", "DE"]
  }
}

If we wanted to apply this configuration only to the production environment, we could do the following:

dynamic "restrictions" {
  for_each = var.environment == "production" ? toset([1]) : toset([])
  content {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE"]
    }
  }
}

On the first line we use a for_each expression to create this block only if the environment variable is set to "production". Inside the content block we define all the properties that were previously defined in the restrictions block.

With Terraform dynamic blocks we can customize our infrastructure creation and avoid repeating the same configuration blocks for resources in our different environments. Happy coding and see you in the Cloud! :)

Juan Bermudez
Cloud Engineer
Teracloud

If you want to know more about Cloud Security, we suggest going check Streamlining Security with Amazon Security Hub: A where to start Step-by-Step Guide
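The dynamic-block pattern above extends naturally to per-environment values. As a sketch (the variable names here are hypothetical, not from the tip above), the whitelist itself can live in a map keyed by environment, so the block is created only for environments that define one:

```hcl
variable "geo_whitelists" {
  type = map(list(string))
  default = {
    production = ["US", "CA", "GB", "DE"]
  }
}

dynamic "restrictions" {
  # Create the block only when the current environment has a whitelist;
  # environments absent from the map get no restrictions block at all.
  for_each = contains(keys(var.geo_whitelists), var.environment) ? [var.geo_whitelists[var.environment]] : []
  content {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = restrictions.value
    }
  }
}
```

Here restrictions.value is the iterator value of the dynamic block, i.e. the list of country codes for the current environment.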

  • Cost Optimization on AWS: 10 Tips to Save Money

AWS (Amazon Web Services) is a popular cloud computing platform that offers a wide range of services, including computing power, storage, and databases. While AWS can provide a great deal of flexibility and scalability for your business, it can also come with a significant cost if we don't pay attention, or if we try to use the cloud the way we used to use on-prem environments. However, there are many ways to reduce costs on AWS. Here are 10 tips for reducing costs on AWS. From taking advantage of the free tier and reserved instances to monitoring resource usage with AWS Cost Explorer, these tips will help you optimize your spending on the platform.

1. Utilize the free tier: AWS offers a free tier of service that includes a certain amount of usage for various services. Take advantage of this free usage to reduce costs.
2. Right-size your instances: Choose the appropriate instance type for your workloads to avoid over-provisioning and paying for more resources than you need.
3. Use reserved instances: By committing to using a certain amount of capacity for a period of time, you can save up to 75% compared to on-demand pricing.
4. Use spot instances: Spot instances allow you to bid on spare Amazon EC2 capacity at a discounted price.
5. Use Auto Scaling: Auto Scaling automatically increases or decreases the number of instances based on the demand for your application, helping you save money on unused capacity.
6. Use Amazon Elastic Block Store (EBS) snapshots for backups: EBS snapshots can help you save money by allowing you to create point-in-time backups of your data, which can be used to restore data or launch new instances.
7. Use Amazon S3 lifecycle policies: S3 lifecycle policies can help you save money by automatically moving data to lower-cost storage tiers as it ages.
8. Use Amazon CloudWatch: CloudWatch allows you to monitor resource usage and set alarms to notify you when usage exceeds a certain threshold, so you can take action to reduce costs.
9. Use AWS Trusted Advisor: Trusted Advisor analyzes your AWS environment and provides recommendations for cost optimization.
10. Use AWS Cost Explorer: Cost Explorer allows you to track your usage and costs over time, so you can identify and eliminate unnecessary spending.

In conclusion, AWS offers many cost-saving options. By utilizing the free tier, right-sizing instances, using reserved and spot instances, Auto Scaling, EBS snapshots, S3 lifecycle policies, CloudWatch, Trusted Advisor, and Cost Explorer, you can effectively reduce costs on AWS. If you need assistance with cost reduction or have any questions about how to reduce your AWS billing, please don't hesitate to reach out for help. We will be more than happy to assist you and provide you with the information you need to make the most efficient use of your money.

https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/aws-cost-optimization.html

Damian Gitto Olguín
Co-Founder/CTO/AWS Hero
Teracloud
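As a back-of-the-envelope illustration of the reserved-instance tip above, here is what "up to 75% off" means over a year for a single always-on instance. The $0.10/hour on-demand price is made up for the example; real prices vary by instance type and region:

```shell
# Hypothetical on-demand price, in cents per hour, for one instance.
on_demand_cents_per_hour=10
hours_per_year=8760

on_demand_yearly_cents=$(( on_demand_cents_per_hour * hours_per_year ))
# A reserved instance at the best-case 75% discount costs 25% of that.
reserved_yearly_cents=$(( on_demand_yearly_cents * 25 / 100 ))

echo "On-demand: \$$(( on_demand_yearly_cents / 100 )) per year"
echo "Reserved:  \$$(( reserved_yearly_cents / 100 )) per year"
```

Under these made-up prices that is $876 per year on demand versus $219 reserved, which is why right-sizing first and then reserving steady-state capacity is such a common combination.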

bottom of page