

  • Overview of the Community Day online

    Community Day events have been held in cities around the world. In August 2020 the event was held online because of the global situation we are all going through. That did not diminish this incredible event, where community leaders from all over Latin America came together to present technical talks, workshops, and hands-on labs that bring us ever closer to what the AWS Cloud offers. The result is a user-driven experience with AWS experts and industry leaders from around the world, where we all gain new and better knowledge through peer learning. This is very enriching, since it covers not only the technical side of the solutions but also the human side that other people bring when solving their challenges, and, as a bonus, registration is free.

In the past we had the opportunity to participate in Buenos Aires, where the 2019 edition was held. There we had the honor and the luck of accompanying our CTO Damián Olguín, who gave a talk on "Automation as a code: From zero to ECS in minutes", demonstrating the experience Teracloud has gained in automation and infrastructure across more than 200 clients.

The keynote presenters for 2020 were Memo Döring (AWS Developer Relations LATAM), who has more than 12 years of experience working for technology companies; Sandy Rodríguez (Scrum-certified CEO), a woman leading the community in Mexico and none other than the founder of the Community Ambassadors Cloud; and Doris Manrique (Cloud Solutions Engineer, Soluciones Orion), founder and leader of the AWS Girls Community and passionate about new technologies. The most relevant topics in this Community Day were containers and Kubernetes, Machine Learning, and Serverless with AWS Lambda + API Gateway, among others. This year our CTO Damián Olguín presented "Setting up your own streaming channel with AWS Media Services + Amplify", which included an introduction to AWS Media Services, the project being built with Amplify, and a live demonstration. In this way, six hours of knowledge sharing were delivered directly by the leaders of the user community, for free and entirely in Spanish. There were also raffles for more than 10 scholarships to sit certification exams and more than 100 promotional credit coupons.

The event also showcased new AWS releases:
1. AWS Controllers for Kubernetes preview: AWS Controllers for Kubernetes (ACK) is a new tool that lets you define and use AWS service resources directly from Kubernetes.
2. Amazon Kinesis Data Streams announced two new API features to simplify consuming data from Kinesis streams.
3. Application and Classic Load Balancers are adding defense in depth with the introduction of Desync Mitigation Mode: Application Load Balancer (ALB) and Classic Load Balancer (CLB) now support HTTP Desync Mitigation Mode, a new feature that protects your application from issues caused by HTTP desync attacks.

As we can see, AWS is constantly innovating and growing to offer the services needed to build sophisticated applications with greater flexibility, scalability, and reliability, all of which are essential in the DevOps world. At Teracloud, as an AWS Consulting Partner, we love to support customer innovation, and we like that the community stays connected so we can grow together and offer improvements to our customers.
If you missed it, all the material is available on the official AWS Twitch channel https://www.twitch.tv/videos/718196267 , and individual sessions will be uploaded to the community Twitch channel https://www.twitch.tv/awscommunitylatam . We look forward to seeing you at an upcoming LATAM Community Day with more innovation and new services. To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Teracloud: A journey to the cloud and beyond

    For many years I was searching for my dream job. In my "short" 30 years I went through different jobs, in different industries and companies, with the most varied teams. One day, in the middle of the pandemic and without really meaning to, I came across Teracloud. Little by little I started to dive into the world of Cloud Computing and the enormity of AWS. I began by getting to know the DevOps culture. I met people with tremendous experience and, above all, with a desire to grow and a great passion for what they do. I found a team with strong technical skills but also an incredible capacity to outdo itself day after day. There is not a single day in which this huge family is not learning something new. There is not a single day in which someone on the team doesn't make you smile. And, more importantly, there is not a minute in which we are not having fun. The best part? When we are all together (remotely or physically in Córdoba Capiiiital).

And here I'll take the chance to tell you about some of our most special events, the ones that shape our cultural DNA. The onboarding week: that first week, which we run in Córdoba, at our offices. The most important goal? A good dose of Teracloud culture right at the start of the journey! Seeing the shy, expectant new faces on arrival and comparing them with the faces at the end of the week, not wanting to leave and begging for the work week to have more than 5 days, is priceless! The OWN IT: it is not just any meeting, it is THE meeting. A monthly space shared by all the teams. We always kick off with a high-energy activity and then each team presents the value it contributed to the company during that month. Virtual birthday celebrations: can you imagine seeing all your teammates dressed up as Disney characters? That happened at my last birthday celebration! We always look for something the birthday person likes and identifies with, and from there we go all in: costumes, camera effects, HAND-made signs, music!

On a normal day at Teracloud you can find a bit of everything; you will never get bored and you will always be enjoying yourself and learning. For me, working at Teracloud means a daily challenge. But one of the nice ones (not the stressful ones). The kind of challenge that invites me to outdo myself every day, to share my achievements with the rest of the team, and that invites all of us to keep working every day so Teracloud keeps growing as the happiest place to work in Córdoba, in Argentina and, why not, in the whole world. So if you ask me today whether I found my dream job, I tell you yes! I tell you I found that and much more! We have endless opportunities and daily challenges, so if you want to live the experience first-hand, I invite you to join us and let's go together to the cloud and beyond! 🚀 Florencia Sánchez Talents Manager Teracloud To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Automate Slack Notifications with Dependabot and GitHub Actions

    Dependabot and GitHub working together. Dependabot is a tool integrated with GitHub that allows us to automate the analysis and update of dependencies in our projects. It works by analyzing the dependency files in our projects and checking whether newer versions exist in the official repositories. It then creates automated Pull Requests (PRs) for out-of-date dependencies. Dependabot works in three ways:
Listing vulnerabilities in the dependencies used in a project.
Creating PRs to fix these vulnerabilities using the minimum required versions.
Creating PRs to keep all dependencies on their latest version.
This Teratip shows how to implement Slack notifications about detected vulnerabilities and automated PRs using GitHub Actions.

Dependabot configuration
Requirements: admin permissions on the repository.
The first step is to configure Dependabot in our repository:
1. Go to the Security tab in the repository.
2. Go to Dependabot in the 'Vulnerability alerts' section.
3. Click on Configure and Manage repository vulnerabilities settings.
4. Then, in the Dependabot section below "Code security and analysis", enable Dependabot alerts and Dependabot security updates.
Note that the Dependency graph is enabled automatically after you enable the Dependabot alerts option. At this point, Dependabot is enabled and will start looking for vulnerabilities and creating automated PRs.

Slack configuration
Requirements: be logged in to your Slack workspace.
In Slack, we need a channel to receive notifications and a Slack app with an incoming webhook URL to be used by our GitHub Actions. It is assumed that the Slack channel already exists (it does not matter whether it is public or private), so let's create the app:
1. Go to https://api.slack.com/messaging/webhooks and click on the Create your Slack app button.
2. Click the Create New App button and select the "From scratch" option.
3. Choose a name for the app and select the workspace where the channel is.
4. Go to Incoming Webhooks and enable that option.
5. Once incoming webhooks are enabled, you can Add New Webhook to Workspace.
6. Select your channel from the list and click on Allow.
GitHub Actions will use this webhook URL.

GitHub Actions configuration
In this last step we will use three actions that already exist.
To get notifications about PRs created by Dependabot:
https://github.com/actions/checkout
https://github.com/kv109/action-ready-for-review
To get notifications about vulnerabilities detected by Dependabot:
https://github.com/kunalnagarco/action-cve
Since not all vulnerabilities can be resolved with automatic PRs, it is good to get notifications for every detected vulnerability. Now we need to create two workflows by adding the following YAML files in .github/workflows in the repository.
dependabot-pr-to-slack.yaml

name: Notify about PR ready for review
on:
  pull_request:
    branches: ["main"]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  slackNotification:
    name: Slack Notification
    if: startsWith(github.head_ref, 'dependabot/') # This job only runs when the PR branch starts with dependabot/
    runs-on: ubuntu-latest
    steps:
      # Latest version available at: https://github.com/actions/checkout/releases
      - uses: actions/checkout@v2.5.0
      - name: Slack Notification
        # Latest version available at: https://github.com/kv109/action-ready-for-review/releases
        uses: kv109/action-ready-for-review@0.2
        env:
          SLACK_CHANNEL: dependabot-notifications
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

This workflow runs every time Dependabot creates a new PR.

dependabot-vulns-to-slack.yaml

name: 'Dependabot vulnerabilities notification to Slack'
on:
  schedule:
    - cron: '0 10 * * 1' # Cron schedule
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  Notify-Vulnerabilities:
    runs-on: ubuntu-latest
    steps:
      # Latest version available at: https://github.com/kunalnagarco/action-cve/releases
      - name: Notify Vulnerabilities
        uses: kunalnagarco/action-cve@v1.7.15
        with:
          token: ${{ secrets.PERSONAL_ACCESS_TOKEN }} # This secret needs to be created
          slack_webhook: ${{ secrets.SLACK_WEBHOOK }} # This secret needs to be created

This workflow runs periodically based on the cron expression. As noted in the code comments, we need to add two secrets to our repository for these workflows: PERSONAL_ACCESS_TOKEN and SLACK_WEBHOOK. To add both secrets follow these steps:
1. Go to the Settings tab in the repository.
2. Go to Secrets → Actions in the 'Security' section.
3. Click on New repository secret and add PERSONAL_ACCESS_TOKEN and SLACK_WEBHOOK.
The names chosen here are the ones used in the workflows, so if you modify them, change them in the YAML files as well. We also need to add the SLACK_WEBHOOK secret under Secrets → Dependabot in the same way as before. The SLACK_WEBHOOK value is the webhook URL created previously. The PERSONAL_ACCESS_TOKEN can be created as follows:
1. Click on your profile and select Settings.
2. Go to Developer settings.
3. Click on Personal access tokens and choose Tokens (classic).
4. Click on Generate new token (classic).
5. Select the permissions required by the action.
6. Click on Generate token and copy the generated token. The token will not be visible again later, so be sure to copy it now.
For this workflow, PERSONAL_ACCESS_TOKEN must belong to an admin collaborator of the repository.

Checking notifications in Slack
Dependabot vulnerability notifications example:
Dependabot PR notifications example:
In the Security tab, under the "Vulnerability alerts" section and Dependabot, we can confirm that the alerts correspond to the detected vulnerabilities and the automated PRs created.

Final Thoughts
Leveraging Dependabot alongside GitHub Actions for Slack notifications offers a streamlined approach to staying informed about version updates within your project's package ecosystem. By setting a daily update interval in the Dependabot configuration file, you ensure timely awareness of any updates; a minimal configuration sketch is shown below. This integration not only simplifies the tracking of changes but also enhances collaboration and communication among team members. For a detailed guide on setting up Dependabot with GitHub Actions and enabling Slack notifications, refer to the comprehensive documentation available on GitHub Docs.
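For version updates, Dependabot is driven by a .github/dependabot.yml file in the repository. The following is a minimal sketch, not taken from the original post: the npm ecosystem and daily interval are assumptions, so adjust them to your project.

# Create a minimal Dependabot configuration (adjust package-ecosystem and directory to your project)
mkdir -p .github
cat > .github/dependabot.yml <<'EOF'
version: 2
updates:
  - package-ecosystem: "npm"   # ecosystem to monitor, e.g. npm, pip, docker, github-actions
    directory: "/"             # location of the dependency manifests
    schedule:
      interval: "daily"        # how often Dependabot checks for new versions
EOF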
Just a click to enable, and you'll be effortlessly keeping pace with the latest version updates, promoting a more secure and efficient development environment.
References:
[1] Sending messages using Incoming Webhooks
[2] Notifies via Slack about pull requests which are ready to review
[3] Action for checking out a repo
[4] A GitHub action that sends Dependabot Vulnerability Alerts to Slack
[5] Create a personal access token for GitHub
[6] GitHub event types
[7] Events that trigger workflows
Ignacio Rubio DevOps Engineer Teracloud
If you want to know more about GitHub, we suggest checking GitHub Actions without AWS credentials. To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Master Generative AI Cloud with AWS: Your Ultimate Resource

    Unlocking Creative Potential with Generative AI in the Cloud
In today's rapidly evolving digital landscape, creativity thrives as a driving force behind innovation. Thanks to advancements in artificial intelligence (AI), particularly generative AI, we are witnessing a profound transformation in how we approach creative endeavors. At the forefront of this revolution stands Amazon Web Services (AWS), offering a comprehensive suite of AI-powered services that change how we think about and harness creativity in the cloud.

Generative AI: A Gateway to Boundless Creativity
Recent years have seen remarkable advancements in AI, particularly in generative AI, where machines are trained to create content, images, and even entire virtual environments. AWS has emerged as a frontrunner, spearheading the future of generative AI in the cloud. With innovative services like AWS Bedrock, AWS SageMaker, and Amazon Q, AWS empowers businesses to harness generative AI and build proprietary AI models, such as large language models, tailored to their unique needs.

AWS Bedrock: Building and Scaling Generative AI Applications with Foundation Models
At the core of AWS's AI ecosystem lies AWS Bedrock, a fully managed service that provides access to foundation models and serves as the backbone for cutting-edge AI development. This powerful tool offers clear advantages for creativity by providing a stable and reliable infrastructure for deploying and scaling AI solutions. With AWS Bedrock, developers and organizations can leverage the power of generative AI with confidence, knowing their applications are built on a robust and secure foundation. This foundation lets customers focus more on innovation and less on infrastructure management, accelerating the pace of AI-driven creativity. Additionally, AWS Bedrock fosters collaboration and interoperability across AWS's AI-powered services, allowing users to seamlessly integrate AI capabilities into their workflows and paving the way for business experimentation.

Amazon SageMaker: Democratizing AI Development
Central to AWS's AI offerings is Amazon SageMaker, a fully managed service that simplifies the process of building, training, and deploying machine learning models at scale. With SageMaker, users can access a wide range of algorithms and frameworks, enabling them to experiment with generative AI capabilities without needing specialized expertise. This democratization of AI development empowers individuals and organizations to tap into their creative potential and experiment with their own data.

Beyond Code: Empowering Creativity with Generative AI Tools
Amazon CodeWhisperer revolutionizes the coding experience by offering intelligent code generation capabilities. During a preview period, participants using CodeWhisperer saw a 27% increase in task completion rates and completed tasks 57% faster than those without it, highlighting its potential to transform coding workflows. Further expanding the boundaries of creativity, Amazon Q in QuickSight offers a transformative approach to visualizing and analyzing data. By combining natural-language querying with generative BI authoring capabilities, analysts can create customizable visuals and refine queries effortlessly. This empowers businesses to make data-driven decisions with clarity and precision, fueling creativity in strategic planning and execution.
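To make this a bit more concrete, here is a hedged sketch of calling Bedrock from the AWS CLI. It is not from the original post: it assumes AWS CLI v2, that model access has already been granted in the Bedrock console, and the anthropic.claude-v2 model ID and request schema; other providers use different request formats, so check the Bedrock documentation for your model.

# List the foundation models available in your region
aws bedrock list-foundation-models --query 'modelSummaries[].modelId'

# Invoke a model with a simple prompt and write the response to a file
aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-v2 \
  --content-type application/json \
  --cli-binary-format raw-in-base64-out \
  --body '{"prompt":"\n\nHuman: Write a haiku about the cloud\n\nAssistant:","max_tokens_to_sample":200}' \
  response.json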
Healthcare Transformed: Revolutionizing Documentation with AWS HealthScribe AWS HealthScribe, a HIPAA-eligible service, empowers healthcare software vendors to automate clinical documentation processes. By combining speech recognition and generative AI, HealthScribe analyzes patient-clinician conversations to generate accurate and easily reviewable clinical notes, reducing the burden on healthcare professionals and enhancing patient care. Final Thoughts: Unleashing Limitless Possibilities with Generative AI The convergence of Generative AI and cloud computing, spearheaded by Amazon Web Services (AWS), is revolutionizing creativity across diverse domains. AWS's suite of innovative AI services enables customers to leverage generative AI and its applications, democratizing AI development, enhancing developer productivity, redefining business intelligence, and revolutionizing healthcare documentation. All in all, AWS's robust foundation empowers individuals and organizations to unleash their creative potential. As we continue to harness the power of Generative AI in the cloud, the possibilities for innovation and creativity are truly limitless. Ready to unlock the power of generative AI for your projects? Our cutting-edge AI services offer unparalleled creativity and efficiency. Take the next step towards revolutionizing your workflow and achieving your goals. Contact us now to explore how our generative AI services can elevate your endeavors today. Alan Bilsky Data Engineer Teracloud

  • Boost Kubernetes Security with KubeSec: Best Practices and Implementation

    Kubesec is an open-source Kubernetes security scanner and analysis tool. It scans your Kubernetes resources for common exploitable risks, such as privileged capabilities, and provides a severity score for each vulnerability it finds.

Security risk analysis for Kubernetes resources:
• Takes a single YAML file as input.
• One YAML file can contain multiple Kubernetes resources.

Kubesec is available as:
• Docker container image at docker.io/kubesec/kubesec:v2
• Linux/MacOS/Win binary (get the latest release)
• Kubernetes Admission Controller
• Kubectl plugin

Keep your cluster secure and follow me through a brief demo!

First things first, we are going to define a bash script that performs the scans on our YAML file by calling the KubeSec API.

1) Execute touch kubesec-scan.sh

2) Create our risky deployment! Execute another touch command: touch insecure-deployment.yaml
Then paste the following content (make sure you use your own image; it can also be a test image, e.g. public.ecr.aws/docker/library/node:slim):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: devsecops
  name: devsecops
spec:
  replicas: 2
  selector:
    matchLabels:
      app: devsecops
  strategy: {}
  template:
    metadata:
      labels:
        app: devsecops
    spec:
      volumes:
        - name: vol
          emptyDir: {}
      containers:
        - image: replace
          name: devsecops-container
          volumeMounts:
            - mountPath: /tmp
              name: vol

3) Back to our bash script: define some variables for later use by calling the KubeSec API. Open the newly created file with your preferred text editor and paste the following:

#!/bin/bash
# KubeSec v2 API
scan_result=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan)
scan_message=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].message)
scan_score=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].score)
# Kubesec scan result processing
# echo "Scan Score : $scan_result"

4) Alright! In the previous step we made some interesting calls to the KubeSec API. The first variable holds a big JSON object (we can see it if we uncomment the echo at the end of the script). For the next two variables, since the response is a JSON object, we use the jq CLI, a powerful and lightweight command-line JSON processor, to extract the scan message and the score.

5) We continue editing the script; now it's time to log some exciting stuff! Add the following to the bash script:

if [[ "${scan_score}" -ge 5 ]]; then
  echo "Score is $scan_score"
  echo "Kubesec Scan $scan_message"
else
  echo "Score is $scan_score, which is less than 5."
  echo "Scanning Kubernetes Resource has Failed"
  exit 1;
fi;

This last section of the script is a basic bash conditional where we check the scan_score variable. If the score is greater than or equal to 5, the manifest "passes" our requirements; otherwise it fails. Note: choose score thresholds that are relevant to your application requirements. This example is just for demo purposes and is not meant to run in production environments.
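If you want to understand why a manifest gets the score it does, you can query the same API and filter the response with jq. This is a hedged sketch based on the shape of the kubesec v2 response, which exposes passed, advise, and critical arrays under the scoring key; verify the field names against the output of your own scan.

# Show only the improvement advice (selector, reason, points) for the deployment
curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq '.[0].scoring.advise'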
The final script will look like this:

#!/bin/bash
# KubeSec v2 API
scan_result=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan)
scan_message=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].message)
scan_score=$(curl -sSX POST --data-binary @"insecure-deployment.yaml" https://v2.kubesec.io/scan | jq .[0].score)
# Kubesec scan result processing
# echo "Scan Score : $scan_result"
if [[ "${scan_score}" -ge 5 ]]; then
  echo "Score is $scan_score"
  echo "Kubesec Scan $scan_message"
else
  echo "Score is $scan_score, which is less than 5."
  echo "Scanning Kubernetes Resource has Failed"
  exit 1;
fi;

Alternatively, run it with Docker as follows:

#!/bin/bash
scan_result=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml)
scan_message=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml | jq .[].message)
scan_score=$(docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < insecure-deployment.yaml | jq .[].score)

6) Time to see the power of the KubeSec scans! Execute the script. The output shows the resulting score and message, which means... there are some security improvement opportunities. At this point we begin to see the potential integrations with our DevSecOps pipeline (see the extras section for a Jenkins example).

7) KubeSec did a good job scanning our deployment. But... how do we act on those security opportunities? If we go a few steps back and uncomment this line:

# echo "Scan Score : $scan_result"

we will be able to see (under the scoring, advise section) an array of security items, each with its point value and reason, among other fields. This is a key component of our scans; now we can take action.

8) Let's make some updates to our insecure deployment. Under containers, add the following:

securityContext:
  runAsNonRoot: true
  runAsUser: 100
  readOnlyRootFilesystem: true

And under the spec section:

serviceAccountName: default

Finally, run the script once again and verify the new score. Awesome! With just a few steps we improved our Kubernetes deployment security!

Extra: try integrating the solution into your DevSecOps pipeline! Below is an example of a Jenkinsfile:

pipeline {
  agent any
  stages {
    stage('Vulnerability Scan - Kubernetes') {
      steps {
        sh "bash kubesec-scan.sh"
      }
    }
  }
}

To read more about security best practices for Kubernetes deployments: https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/
References:
https://kubesec.io/
https://www.jenkins.io/doc/book/pipeline/
Tomás Torales Cloud Engineer Teracloud
If you want to know more about Security, we suggest checking Streamlining Security with Amazon Security Hub. To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Streamline Terraform Setup on Mac M1: Docker and TFenv in Three Easy Steps

    Find the solution to Terraform compatibility conflicts on the M1 architecture. This Teratip helps you bypass the difficulties caused by legacy Terraform provider incompatibility on the M1 architecture by using Docker with Ubuntu Linux, so you can run your plans and apply them without relying on an external Linux environment.

Step 1: Create your Dockerfile
Use the following Dockerfile to create a Docker image that includes Ubuntu 20.04 and tfenv:

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install -y --no-install-recommends git curl ca-certificates unzip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/tfutils/tfenv.git ~/.tfenv && \
    echo 'export PATH="$HOME/.tfenv/bin:$PATH"' >> ~/.bashrc
ENV PATH="/root/.tfenv/bin:${PATH}"
ARG TF_VERSION=0.15.4
RUN tfenv install $TF_VERSION && \
    tfenv use $TF_VERSION
RUN tfenv --version && \
    terraform --version
CMD ["tail", "-f", "/dev/null"]

Step 2: Build the Docker Image
With the Dockerfile ready, build your Docker image, setting the version of Terraform that you need:

docker build --build-arg TF_VERSION=0.15.4 -t maosella/tfenv:0.25 .

Step 3: Run and Work in your Container
To run the Docker container, go to the root of your Terraform project and execute:

docker run -it -v ${PWD}:/workspace -w /workspace maosella/tfenv:0.25 /bin/bash

This command starts the container and mounts the current directory (${PWD}) at /workspace inside the container, keeping changes synchronized between them. Because the container has a volume mapped to the working repository, you can keep editing your files in VSCode while the container sees the same files in its /workspace directory, and from the container's shell you can run terraform plan and terraform apply against them without problems. The -it flag gives you an interactive shell to work with Terraform commands. Remember that you can change the Terraform version with:

tfenv install 1.0.0
tfenv use 1.0.0

Before executing Terraform commands, configure your environment variables with the Access Key and Secret Access Key of your AWS user so that Terraform is authorized to access your account:

export AWS_ACCESS_KEY_ID="ASIAQATREYEPYOHALTB"
export AWS_SECRET_ACCESS_KEY="l6YigMubZUu4fZdDFTQR/Xo4+Y9veTREFl17B/bA3"

Security Considerations: it's essential that you handle your AWS keys with caution. Be sure not to expose your keys in scripts or Dockerfiles.
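As a small additional precaution (an assumption on top of the original setup, reusing the same image tag as above), you can export the credentials on your host and pass them through to the container by name with -e flags, so they never have to be typed inside the container's shell:

# Variables already exported on the host are passed through by name only
docker run -it \
  -v ${PWD}:/workspace -w /workspace \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  maosella/tfenv:0.25 /bin/bash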
Now we have everything in order to use Terraform normally.

Integration with Visual Studio Code (VSCode) (optional)
In VSCode, you can install the "Remote - Containers" extension to manage the container filesystem as if you were working locally and interact with Docker containers directly from VSCode. You can find the extension here: Remote - Containers.

Final Thoughts
With these three simple steps, you can have a fully functional Terraform environment on your Mac with M1, giving you the freedom and flexibility to work on your projects without restrictions and transforming the challenge of incompatibility into a productivity win with Docker🐳.
Martin Osella Cloud Engineer Teracloud
To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • EKS Cost Management: Using Kubecost Effectively in Your Cluster

    Kubecost is an efficient and powerful tool that allows you to manage costs and resource allocation in your Kubernetes cluster. It provides a detailed view of the resources used by your applications and helps optimize resource usage, which can ultimately reduce cloud costs. In this document, we'll guide you through the steps needed to use Kubecost in your Kubernetes cluster. Let's dive in.

Contents:
Deploy Kubecost in Amazon EKS
Step #1: Install Kubecost on your Amazon EKS cluster
Step #2: Generate the Kubecost dashboard endpoint
Step #3: Access the cost monitoring dashboard
Overview of available metrics
Final thoughts

Deploy Kubecost in Amazon EKS
To get started, follow these steps to deploy Kubecost into your Amazon EKS cluster in just a few minutes using Helm.
Prerequisites:
Install the following tools: Helm 3.9+, kubectl, and optionally eksctl and awscli.
You need access to an Amazon EKS cluster. To deploy one, see Getting started with Amazon EKS.
If your cluster is running Kubernetes version 1.23 or later, you must have the Amazon EBS CSI driver installed on your cluster.

Step #1: Install Kubecost on your Amazon EKS cluster.
In your environment, run the following command from your terminal to install Kubecost on your existing Amazon EKS cluster:

helm upgrade -i kubecost \
  oci://public.ecr.aws/kubecost/cost-analyzer --version 1.99.0 \
  --namespace kubecost --create-namespace \
  -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/cost-analyzer/values-eks-cost-monitoring.yaml

Note: you can find all available versions of the EKS-optimized Kubecost bundle here. We recommend finding and installing the latest available Kubecost cost analyzer chart version. By default, the installation includes certain prerequisite software, including Prometheus and kube-state-metrics. To customize your deployment (e.g., skipping these prerequisites if you already have them running in your cluster), you can find the list of available configuration options in the Helm configuration file.

Step #2: Generate the Kubecost dashboard endpoint.
After you install Kubecost using the Helm command in step 1, it should take under two minutes to complete. Run the following command to enable port-forwarding and expose the Kubecost dashboard:

kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090

Step #3: Access the cost monitoring dashboard.
In your web browser, navigate to http://localhost:9090 to access the dashboard. You can now start tracking your Amazon EKS cluster cost and efficiency. Depending on your organization's requirements and setup, there are several options to expose Kubecost for ongoing internal access. Here are a few examples you can use as references:
Check out the Kubecost documentation for Ingress Examples as a reference for using the Nginx ingress controller with basic auth.
Consider using the AWS Load Balancer Controller to expose Kubecost and Amazon Cognito for authentication, authorization, and user management. You can learn more in How to use Application Load Balancer and Amazon Cognito to authenticate users for your Kubernetes web apps.

Overview of available metrics
The following are examples of the metrics available within the Kubecost dashboard. Use Kubecost to quickly see an overview of Amazon EKS spend, including cumulative cluster costs, associated Kubernetes asset costs, and monthly aggregated spend.
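If you prefer to pull these numbers programmatically rather than through the UI, the cost-analyzer also exposes an HTTP API behind the same port-forward. The endpoint and parameters below are an assumption based on Kubecost's Allocation API documentation, so verify them against the docs for the version you installed.

# Aggregate the last 7 days of cost by namespace (uses the port-forward from step 2)
curl -s "http://localhost:9090/model/allocation?window=7d&aggregate=namespace" | jq .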
Cost allocation by namespace
View monthly Amazon EKS costs as well as cumulative costs per namespace and other dimensions over up to the last 15 days. This enables you to better understand which parts of your application are contributing to Amazon EKS spend.

Spend and usage for other AWS services associated with Amazon EKS clusters
View the costs of the AWS infrastructure assets that are associated with your EKS resources.

Export cost metrics
At a high level, Amazon EKS cost monitoring is deployed with Kubecost, which includes Prometheus, an open-source monitoring system and time series database. Kubecost reads metrics from Prometheus, performs cost allocation calculations, and writes the metrics back to Prometheus. Finally, the Kubecost front end reads metrics from Prometheus and shows them on the Kubecost user interface (UI). The architecture is illustrated by the following diagram.

Kubecost reading metrics
With this pre-installed Prometheus, you can also write queries to ingest Kubecost data into your current business intelligence system for further analysis. You can also use it as a data source for your current Grafana dashboards to display Amazon EKS cluster costs in a way your internal teams are already familiar with. To learn more about how to write Prometheus queries, review Kubecost's documentation or use the example Grafana JSON models in the Kubecost GitHub repository as references.

AWS Cost and Usage Report (AWS CUR) integration
To perform cost allocation calculations for your Amazon EKS cluster, Kubecost retrieves the public pricing information of AWS services and resources from the AWS Price List API. You can also integrate Kubecost with the AWS CUR to improve the accuracy of the pricing information specific to your AWS account (e.g., Enterprise Discount Programs, Reserved Instance usage, Savings Plans, and Spot usage). You can learn more about how the AWS CUR integration works at AWS Cloud Integration.

Cleanup
You can uninstall Kubecost from your cluster with the following command:

helm uninstall kubecost --namespace kubecost

Final thoughts
Implementing Kubecost in your Amazon EKS cluster can significantly enhance your cost management and resource optimization efforts. By providing a comprehensive view of resource usage and associated costs, Kubecost empowers you to make informed decisions about resource allocation, which can lead to reduced cloud costs. Its easy deployment process using Helm makes it accessible to users with various levels of expertise. Additionally, Kubecost's integration with Prometheus enables you to leverage your existing business intelligence systems and Grafana dashboards for further analysis and visualization. Overall, Kubecost proves to be an invaluable tool for cost-conscious organizations seeking to maximize their Amazon EKS cluster's efficiency while keeping cloud expenditures in check. Give Kubecost a try today and take control of your Kubernetes cost management with ease.
Martín Carletti Cloud Engineer Teracloud
If you want to know more about Kubernetes, we suggest checking Conftest: The path to more efficient and effective Kubernetes automated testing. To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Teraweek, a week for team building

    Team Building. The year 2020 brought great changes to our way of life. The unexpected and sudden crisis caused by the COVID-19 pandemic had deep implications for our work routine. Remote work, or "home office", became a trend forced by the new reality, abruptly changing the habits and richness that come with sharing an office. We changed the way we think about work, we adapted, but we always longed to return to our workplaces to recover the most valuable thing we had lost: social interaction and face-to-face communication. As legal restrictions eased, Teracloud reopened its doors to the team. We enjoyed meeting again, sharing, being able to move freely through the streets... but we still had to get past our work bubbles and gather all together.

Understanding this need and the wishes of its employees, in December 2021 Teracloud organized the "Teraweek", bringing most of the team together at its Latin American headquarters. "Teraclouders" came to Córdoba from Quilmes (Buenos Aires), General Fernández Oro (Río Negro), Tandil (Buenos Aires), and Montevideo (Uruguay). Throughout the week we shared not only the workday but also breakfasts full of anecdotes, long after-lunch conversations, dinners, outings, and games full of laughter. We were reunited! The icing on the cake of that week, after the joy of seeing each other again for some and meeting for the first time for others, came on Friday, December 17, when Teracloud surprised us with an outdoor recreational day. "I leave the Teraweek speechless. Being Uruguayan and having been given the opportunity to be here... I leave grateful" - Rodrigo, DevOps Engineer, Teracloud.

We set off for the Córdoba hills, heading to the Acuarela del Río ranch, located on the banks of the San Pedro river in San Clemente (Santa María department). There we enjoyed the river, went cross-country hiking, practiced yoga (or tried to), challenged each other in pool tournaments, talked... talked a lot, enjoyed each other's company, reconnected and, some of us, even got emotional. Even though the pandemic continues and the personal and professional challenges go on, after this great experience we started 2022 with the certainty that we will soon meet again, and with the assurance that Teracloud keeps fostering team spirit, supporting us and understanding the personal situations that this new reality brought with it. "The Teraweek was super important for getting to know each other. I never hesitated to come. On the contrary, I was super motivated. In fact, I will try to come back at some point to keep doing team building, because I think it's great. I like the remote structure, but I feel that sometimes we need a bit of contact with our teammates" - Mariano, DevOps Engineer, Teracloud. Victoria Vélez Funes SEO - SEM Specialist teracloud.io

  • Boost AWS Security with Trivy Vulnerability Scanning

    As we already know, AWS provides a useful tool that scans our images for vulnerabilities when we push them to our registry. In this TeraTip we are going to add an extra security layer by using an open-source tool called Trivy. Trivy is a comprehensive and versatile security scanner: it has scanners that look for security issues, and targets where it can find those issues.

Targets (what Trivy can scan):
Container Image
Filesystem
Git Repository (remote)
Virtual Machine Image
Kubernetes
AWS

Scanners (what Trivy can find there):
OS packages and software dependencies in use (SBOM)
Known vulnerabilities (CVEs)
IaC issues and misconfigurations
Sensitive information and secrets
Software licenses

Let us begin with a demo on Docker image scanning.

1) Install Trivy. In my case, locally, and since I'm using an Ubuntu distribution, I will proceed with the following:

sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy

2) Execute trivy -v to verify the installation.

3) Now we can run trivy image ${our_image_to_scan}. For example:

trivy image adoptopenjdk/openjdk8:alpine-slim

4) Let's try another one:

trivy image php:8.1.8-alpine

Ok, this output looks a bit more dangerous.

5) Fair enough. Now it would be helpful to automate these scans for use in our DevSecOps pipelines. Create a file:

touch trivy-docker-image-scan.sh

With your IDE of choice, open the file and paste the following content:

#!/bin/bash
dockerImageName=$(awk 'NR==1 {print $2}' Dockerfile)
echo $dockerImageName

These initial lines grab the Docker image name from the FROM line of the Dockerfile and echo it to the terminal.

6) We continue editing the script with the Trivy commands, checking for different vulnerability severities. The first command always exits with code 0, even if MEDIUM or HIGH vulnerabilities are found; the second exits with code 1 only when CRITICAL vulnerabilities are present. So if the last exit code is 1, we know without a doubt that we have critical vulnerabilities in our image.

trivy image --exit-code 0 --severity MEDIUM,HIGH $dockerImageName
trivy image --exit-code 1 --severity CRITICAL $dockerImageName

7) The previous step is useful on its own, but how do we leverage this information in our DevSecOps pipelines? This is where we can let a build continue (or not) depending on the exit code. Let's add the bash conditional:

# Trivy scan result processing
exit_code=$?
echo "Exit Code : $exit_code"
# Check scan results
if [[ "${exit_code}" == 1 ]]; then
  echo "Image scanning failed. Vulnerabilities found"
  exit 1;
else
  echo "Image scanning passed. No CRITICAL vulnerabilities found"
fi;

Alright! Now we are able to scan our Docker images and take action based on the exit code, which reflects the vulnerabilities found. Let's take a look at the final script and how we can implement it in a Jenkins pipeline.

#!/bin/bash
dockerImageName=$(awk 'NR==1 {print $2}' Dockerfile)
trivy image --exit-code 0 --severity MEDIUM,HIGH $dockerImageName
trivy image --exit-code 1 --severity CRITICAL $dockerImageName
# Trivy scan result processing
exit_code=$?
echo "Exit Code : $exit_code"
# Check scan results
if [[ "${exit_code}" == 1 ]]; then
  echo "Image scanning failed. Vulnerabilities found"
  exit 1;
else
  echo "Image scanning passed. No CRITICAL vulnerabilities found"
fi;

Jenkinsfile:

pipeline {
  agent any
  stages {
    stage('Trivy Vulnerability Scan - Docker') {
      steps {
        sh "bash trivy-docker-image-scan.sh"
      }
    }
  }
}

Note: there are some additional steps needed to configure Jenkins (installing the required plugins, dependencies, and so on), but since this is not a Jenkins TeraTip, and for brevity, we keep it as simple as possible.
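As listed at the start, container images are only one of Trivy's targets. As a hedged extra (not part of the original script; adjust paths and severity thresholds to your repository), the same pattern works for filesystem and IaC scanning:

# Scan the project filesystem for vulnerable dependencies and exposed secrets
trivy fs --exit-code 1 --severity CRITICAL .

# Scan IaC files (Terraform, Kubernetes manifests, Dockerfiles) for misconfigurations
trivy config --exit-code 1 --severity HIGH,CRITICAL .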
References:
https://aquasecurity.github.io/trivy/v0.18.3/examples/others/
https://aquasecurity.github.io/trivy/v0.18.3/installation/#nixnixos
https://www.jenkins.io/doc/book/pipeline/
Tomás Torales Cloud Engineer Teracloud
If you want to know more about Cloud Security, we suggest checking What AWS Re:Invent brings us in terms of Security. To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Secure S3 Website Hosting with AWS Cloudfront and Cognito Authentication

    Sometimes we need to protect our website (or part of it) from unauthorized access. This can be tricky, because we either have to build a custom authentication module or integrate a third-party platform with our system, and we also need to worry about the availability and performance of that service as it grows. In this Teratip we will discover a way of deploying our static web content to a highly available service such as AWS S3, using CloudFront as a CDN that distributes the content quickly and reliably at high speed. As mentioned before, we need to protect it from unauthorized access, so we will implement AWS Cognito as the authentication service, using JWT for session management via AWS Lambda.

High Availability Website
To begin, we decide whether to host a new S3 website or use an existing one. We can deploy our static web content to a private S3 bucket and access it via CloudFront using OAI. Our Terraform module lets you set your domain and aliases and then creates the CloudFront distribution, the S3 bucket, and even the SSL certificates (using Amazon Certificate Manager).

Authentication Process
Good: we have our highly available website running, but we notice that anyone can access it. Don't worry, because our module creates an authentication process that is triggered by CloudFront when a user requests access to our website.

Lambda@Edge
Lambda@Edge lets you run AWS Lambda functions in an AWS location close to your customer in response to CloudFront events. Your code can be triggered by Amazon CloudFront events such as requests for content by viewers or requests from CloudFront to origin servers. You can use Lambda functions to change CloudFront requests and responses at several points of the request/response cycle (viewer request, viewer response, origin request, and origin response).

JWT Authentication
JSON Web Token (JWT) is a JSON-based open standard for creating access tokens that assert a series of claims as a JSON object. There are several benefits to using Lambda@Edge for authorization operations. First, performance improves because the authorization function runs at the Lambda@Edge location closest to the viewer, reducing latency and response time for the viewer request. The load on your origin servers is also reduced by offloading CPU-intensive operations such as verification of JSON Web Token (JWT) signatures. We implement the JWT authentication process in Lambda functions using Node.js 14.x in this case.

Amazon Cognito
Now let's talk about AWS Cognito. Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It also scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0 and OpenID Connect. Using this service, our authentication process works as follows: our Terraform module creates the Cognito user pool for us and adds the Cognito login URL to the CloudFront response whenever we are not authenticated. Then we can sign up new users via the Cognito login URL or, better yet, open the Cognito service in our AWS account and manage users, add new authentication providers, change password policies, and a lot of other things.
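For quick testing you can also create a user from the CLI instead of the hosted sign-up page. This is a hedged sketch: the pool ID, username, and password below are placeholders, and your pool's password policy may require different values.

# Create a test user in the user pool and set a permanent password
aws cognito-idp admin-create-user \
  --user-pool-id us-west-2_EXAMPLE \
  --username test@example.com

aws cognito-idp admin-set-user-password \
  --user-pool-id us-west-2_EXAMPLE \
  --username test@example.com \
  --password 'S0me-Str0ng-Passw0rd!' \
  --permanent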
Finally, our architecture works as follows: first-time access without an authenticated user, and then access once we authenticate with a Cognito user.

PoC Terraform module
This solution is easy to deploy because we built a Terraform module that, with a few variables, can deploy the entire infrastructure for us. Below is an example that creates the website https://cloudfront-s3-cognauth.sandbox.teratest.net , a Cognito user pool, and finally uploads an index.html to the S3 bucket so we can check that the website is reachable after authentication.

module "cloudfront-s3-cognito" {
  source = "git::git@github.com:teracloud-io/terraform_modules//services/web-cloudfront-s3-cognito"

  # Region where the S3 website and Cognito services will be created - default is us-east-1
  region = "us-west-2"

  # Name of the service that will be used as a subdomain for your website
  service = "cloudfront-s3-cognauth"

  # Name of the domain that will be used in the web URL (this domain must exist in Route53)
  domain = "sandbox.teratest.net"

  # The name of the Lambda function that will handle the JWT authentication process
  lambda_name = "cognito-jwt-auth"

  # If you want to create an index.html for the S3 website, leave this variable set to true
  index_example = true

  # To add a logo to the Cognito login page, set the path/filename
  logo = "logo.png"
}

Conclusion
That's all you need! With a few lines of Terraform we've created a frontend application in S3 with CloudFront as the CDN, SSL certificates, and authentication mechanisms that protect it. Protecting frontend and even backend code has never been easier, and doing so at the infrastructure level lets your apps focus on just what they ought to.

References
AWS blog on validating JWTs with Lambda@Edge: https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-json-web-tokens-to-enhance-web-application-security/
Sample code to decode and validate JWTs in Python and TypeScript: https://github.com/awslabs/aws-support-tools/tree/master/Cognito/decode-verify-jwt
Authorization with Lambda@Edge and JSON Web Tokens (JWTs): https://github.com/aws-samples/authorization-lambda-at-edge/blob/master/
Accessing cookies in Lambda@Edge: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
Nicolas Balmaceda DevOps Teracloud
To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Self-Managed ArgoCD Explained: Benefits and Best Practices

    The answer is yes, ArgoCD can manage itself. But how, you may ask? Read this TeraTip to learn how to set up ArgoCD to manage itself.

First of all, what is ArgoCD? ArgoCD is a GitOps Continuous Delivery tool for Kubernetes. It can manage all your cluster resources by constantly comparing their state in the cluster against the repositories of those resources.

We will create a Minikube cluster for this PoC:

minikube start

Once our cluster is running, let's install ArgoCD in the argocd namespace using Helm and the official Helm chart from the Argo project:

helm repo add argo https://argoproj.github.io/argo-helm
helm install argo argo/argo-cd -n argocd

Now it's time to implement something known as the App of Apps pattern. The App of Apps pattern consists of having one ArgoCD Application that is made up of other ArgoCD Applications. You can take this repository as an example: https://github.com/JuanWigg/self-managed-argo

Basically, here we have a main application called applications. This main application synchronizes with our self-managed-argo repo, and in this repo we have all of our other ArgoCD applications, for example a kube-prometheus stack, core applications, Elasticsearch, and so on. Most importantly, we have an application for ArgoCD itself. The main application looks something like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: applications
  namespace: argocd
spec:
  project: default
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/JuanWigg/self-managed-argo
    targetRevision: HEAD
    path: applications
  syncPolicy:
    automated:
      prune: false      # Specifies if resources should be pruned during auto-syncing (false by default).
      selfHeal: true    # Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected (false by default).
      allowEmpty: false # Allows deleting all application resources during automatic syncing (false by default).
    syncOptions:
      - CreateNamespace=true

As you can see, the path for the application is applications. We have that same folder in our repo, containing all the applications ArgoCD is going to manage (including itself). Just as an example, here is the ArgoCD application code:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  source:
    chart: argo-cd
    repoURL: https://argoproj.github.io/argo-helm
    targetRevision: 5.27.1
    helm:
      releaseName: argo
  syncPolicy:
    automated:
      prune: false      # Specifies if resources should be pruned during auto-syncing (false by default).
      selfHeal: true    # Specifies if partial app sync should be executed when resources are changed only in the target Kubernetes cluster and no git change is detected (false by default).
      allowEmpty: false # Allows deleting all application resources during automatic syncing (false by default).

Make sure the chart version you put in the application matches the version you deployed earlier with Helm. Lastly, apply the main application to the cluster:

kubectl apply -f applications.yaml

And there you have it! Now you have ArgoCD managing itself and all your applications in your cluster!
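To log in to the UI and verify what Argo is managing, you can grab the initial admin password and port-forward the API server. This is a hedged sketch: it assumes the Helm release name argo used above and a recent chart version that creates the argocd-initial-admin-secret; check kubectl get svc -n argocd if the service name differs in your setup.

# Retrieve the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

# Expose the ArgoCD UI on https://localhost:8080
kubectl -n argocd port-forward svc/argo-argocd-server 8080:443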
Juan Wiggenhauser Cloud Engineer Teracloud
To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .

  • Serverless Deployment: Simplifying Application Management

    There are many ways to deploy infrastructure as code, but today's Teratip is about a special one we like to use: Serverless. As with many IaC tools, you start by writing a text file and then running a binary that controls the creation of what you declared. Serverless shines, however, when the infrastructure centers on AWS resources such as Lambda functions or DynamoDB. In less than 50 lines of YAML you can create a state-of-the-art stack using S3 buckets, DynamoDB, and more, with all the required policies to keep it safe. For example, a YAML file like the following will create an S3 bucket, a DynamoDB table, and the infrastructure for the function that connects them:

service: quicksite
frameworkVersion: ">=1.1.0"

provider:
  name: aws
  runtime: nodejs10.x
  environment:
    DYNAMODB_TABLE: ${self:service}-${opt:stage, self:provider.stage}-uniqname
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"

functions:
  create:
    handler: fn/create.create
    events:
      - http:
          path: fn
          method: post

resources:
  Resources:
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${opt:stage, self:provider.stage}-uniqname
        AccessControl: PublicRead
    MyDb:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}

Once you have your YAML file, Serverless compiles it to CloudFormation, performs the full deployment of its content, and keeps track of it for future modifications. Nice, isn't it? Give it a try; a minimal deployment sketch follows at the end of this post. Start at https://www.serverless.com/ . Let us know if you like Serverless and we'll keep you updated with more Teratips about it. Juan Eduardo Castaño DevOps Engineer Teracloud To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here .
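To try the example above end to end, here is a minimal sketch. It assumes Node.js is installed and your AWS credentials are already configured; the stage and region values are placeholders.

npm install -g serverless                         # install the Serverless Framework CLI
serverless deploy --stage dev --region us-east-1  # compile to CloudFormation and deploy the stack
serverless remove --stage dev --region us-east-1  # tear everything down when you are done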
