- S3 Website + CloudFront CDN with Authentication via AWS Cognito
Sometimes we need to protect our website (or part of it) from unauthorized access. This can be tricky: we either have to build a custom authentication module or integrate a third-party platform into our system, and we also have to worry about the availability and performance of that service as it grows. In this Teratip we will discover a new way of deploying our static web content to a highly available service such as AWS S3, using CloudFront as a CDN to distribute the content quickly and reliably. As mentioned before, we need to protect it from unauthorized access, so we will implement AWS Cognito as the authentication service, using JWTs for session management via AWS Lambda.

High Availability Website

To begin, we decide whether to host a new S3 website or use an existing one. We can deploy our static web content to a private S3 bucket and access it through CloudFront using an Origin Access Identity (OAI). Our Terraform module lets you set your domain and aliases and then creates the CloudFront distribution, the S3 bucket, and even the SSL certificates (using AWS Certificate Manager).

Authentication Process

Good, our highly available website is already running, but notice that anyone can access it. Don't worry: our module creates an authentication process that is triggered by CloudFront whenever a user requests access to the website.

Lambda@Edge

Lambda@Edge lets you run AWS Lambda functions in an AWS location close to your customer in response to CloudFront events. Your code can be triggered by Amazon CloudFront events such as requests for content by viewers or requests from CloudFront to origin servers. You can use Lambda functions to change CloudFront requests and responses at the following points: after CloudFront receives a request from a viewer (viewer request), before CloudFront forwards the request to the origin (origin request), after CloudFront receives the response from the origin (origin response), and before CloudFront forwards the response to the viewer (viewer response).

JWT Authentication

JSON Web Token (JWT) is a JSON-based open standard for creating access tokens that assert a series of claims as a JSON object. There are several benefits to using Lambda@Edge for authorization operations. First, performance is improved by running the authorization function at the Lambda@Edge location closest to the viewer, reducing latency and response time for the viewer request. The load on your origin servers is also reduced by offloading CPU-intensive operations such as verification of JWT signatures. We implement the JWT authentication process in Lambda functions using Node.js 14.x in this case (a minimal sketch of such a handler is shown at the end of this section).

Amazon Cognito

Now let's talk about Amazon Cognito. Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, as well as enterprise identity providers via SAML 2.0 and OpenID Connect. Using this service, our authentication process works as follows: our Terraform module creates the Cognito user pool for us and adds the Cognito login URL to the CloudFront response when the user is not authenticated. Then we can sign up new users through the Cognito login URL or, better yet, go to the Cognito console in our AWS account to manage users, add new identity providers, change password policies, and much more.
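To make the Lambda@Edge piece more concrete, below is a minimal sketch of a viewer-request handler in Node.js. This is not the exact code our module deploys: the cookie name, the Cognito hosted UI URL, and the public key are illustrative placeholders, and the jsonwebtoken package would have to be bundled with the function (Lambda@Edge does not support environment variables, so configuration is inlined).

// Minimal sketch of a viewer-request Lambda@Edge handler (illustrative only).
// Assumptions: the session JWT travels in a cookie named "id_token" and the
// RSA public key has already been derived from the user pool's JWKS.
const jwt = require('jsonwebtoken');

// Placeholders: replace with the hosted UI URL and key of your own user pool.
const COGNITO_LOGIN_URL =
  'https://<your-domain>.auth.us-east-1.amazoncognito.com/login?client_id=<client-id>&response_type=code&redirect_uri=<redirect-uri>';
const PUBLIC_KEY_PEM =
  '-----BEGIN PUBLIC KEY-----\n<pem derived from the user pool JWKS>\n-----END PUBLIC KEY-----';

// Extract a cookie value from the CloudFront request headers.
const getCookie = (headers, name) => {
  for (const header of headers.cookie || []) {
    for (const part of header.value.split(';')) {
      const [key, ...rest] = part.trim().split('=');
      if (key === name) return rest.join('=');
    }
  }
  return null;
};

// Build a 302 response that sends the viewer to the Cognito login page.
const redirectToLogin = () => ({
  status: '302',
  statusDescription: 'Found',
  headers: { location: [{ key: 'Location', value: COGNITO_LOGIN_URL }] },
});

exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const token = getCookie(request.headers, 'id_token');
  if (!token) return redirectToLogin();             // no session: go authenticate
  try {
    jwt.verify(token, PUBLIC_KEY_PEM, { algorithms: ['RS256'] }); // signature + expiry check
    return request;                                  // valid token: let CloudFront serve the object
  } catch (err) {
    return redirectToLogin();                        // invalid or expired token
  }
};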
Finally, our architecture works in two scenarios: access for the first time, without an authenticated user, and access once we authenticate with a Cognito user.

PoC Terraform module

This solution is easy to deploy because we built a Terraform module that, with a few variables, deploys the entire infrastructure for us. Below is an example that creates the website https://cloudfront-s3-cognauth.sandbox.teratest.net, a Cognito user pool, and finally uploads an index.html to the S3 bucket so we can check that, after authenticating, we can access the website.

module "cloudfront-s3-cognito" {
  source = "git::git@github.com:teracloud-io/terraform_modules//services/web-cloudfront-s3-cognito"

  # Region where the S3 website and Cognito services will be created - default is us-east-1
  region = "us-west-2"

  # Name of the service, used as a subdomain for your website
  service = "cloudfront-s3-cognauth"

  # Domain used in the web URL (this domain must exist in Route 53)
  domain = "sandbox.teratest.net"

  # Name of the Lambda function that handles the JWT authentication process
  lambda_name = "cognito-jwt-auth"

  # Set to true to create an example index.html for the S3 website
  index_example = true

  # To add a logo to the Cognito login page, set the path/filename
  logo = "logo.png"
}

Conclusion

That's all you need! With a few lines of Terraform, we've created a frontend application in S3 with CloudFront as a CDN, SSL certificates, and an authentication mechanism that protects it. Protecting frontend (and even backend) code has never been easier, and doing it at the infrastructure level lets your apps focus on just what they ought to do.

References

AWS blog on validating JWTs with Lambda@Edge: https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-how-to-use-lambdaedge-and-json-web-tokens-to-enhance-web-application-security/
Sample code to decode and validate JWTs in Python and TypeScript: https://github.com/awslabs/aws-support-tools/tree/master/Cognito/decode-verify-jwt
Authorization with Lambda@Edge and JSON Web Tokens (JWTs): https://github.com/aws-samples/authorization-lambda-at-edge/blob/master/
Accessing cookies in Lambda@Edge: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html

Nicolas Balmaceda
DevOps
Teracloud
- GitHub Reusable Workflows
Keep your workflows DRY

GitHub Reusable Workflows

It is common in an organizational environment to have multiple applications built with the same technologies or frameworks, even sharing a common CI/CD pipeline. Most of the time this ends up in repeating a lot of code to, for example, deploy Terraform infrastructure, upload a container image to a registry, and other similar jobs. For this reason, to keep your pipelines DRY (Don't Repeat Yourself), we bring you GitHub reusable workflows.

What a normal workflow looks like

Here is an example of a typical workflow that uploads a container image to ECR:

build_n_push:
  name: Build and Push to ECR
  runs-on: ubuntu-latest
  outputs:
    image_tag: ${{ steps.set-image-tag.outputs.image_tag }}
    latest_tag: ${{ steps.set-latest-tag.outputs.latest_tag }}
  steps:
    - name: Checkout code
      uses: actions/checkout@v2
      with:
        submodules: true
        fetch-depth: 0
    - name: Set Image tag
      id: set-image-tag
      shell: bash
      run: |
        echo "::set-output name=image_tag::$(echo ${{ github.sha }} | cut -c1-12)"
    - name: Set LATEST tag
      id: set-latest-tag
      shell: bash
      run: |
        if [ ${{ github.ref }} == 'refs/heads/develop' ]; then echo "::set-output name=latest_tag::latest"; else echo "::set-output name=latest_tag::latest-staging"; fi
    - name: Configure AWS Credentials
      id: configure-credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1
    - name: Login to Amazon ECR
      id: login-container-registry
      uses: aws-actions/amazon-ecr-login@v1
    - name: Build and Push Container Image
      id: build-container-image
      env:
        ECR_REGISTRY: ${{ steps.login-container-registry.outputs.registry }}
        ECR_REPOSITORY: my_ecr_repo
      run: |
        docker build . \
          --progress plain \
          --file Dockerfile \
          --tag $ECR_REGISTRY/$ECR_REPOSITORY:${{ steps.set-image-tag.outputs.image_tag }}
        docker tag \
          $ECR_REGISTRY/$ECR_REPOSITORY:${{ steps.set-image-tag.outputs.image_tag }} \
          $ECR_REGISTRY/$ECR_REPOSITORY:${{ steps.set-latest-tag.outputs.latest_tag }}
        docker push $ECR_REGISTRY/$ECR_REPOSITORY:${{ steps.set-image-tag.outputs.image_tag }}
        docker push $ECR_REGISTRY/$ECR_REPOSITORY:${{ steps.set-latest-tag.outputs.latest_tag }}

Now imagine having to copy this workflow into each one of your applications (and you would need to copy your deployment workflows too!). That is a lot of tedious, repetitive work involving plenty of human interaction, which makes our pipelines more error-prone. It would be great to have all this standard code in one place, where we can maintain it more easily and use it from other repos in our organization. This is where reusable workflows come in handy.

How to make a workflow "reusable"

It is easy to create a reusable workflow for your GitHub Actions CI/CD pipeline, but we have to keep in mind some limitations before starting:
- Reusable workflows can't call other reusable workflows.
- You can't call reusable workflows in a private repository unless the caller is in the same repository.
- Environment variables aren't propagated from the caller workflow to the called workflow (don't worry, we can solve this with inputs).

So now we can define our first reusable workflow:

name: Build and push a Docker image to ECR

on:
  workflow_call:
    inputs:
      ECR_REPO:
        required: true
        type: string
      AWS_REGION:
        required: false
        type: string
        default: "us-east-1"
    outputs:
      image_tag:
        description: Image tag created on the workflow
        value: ${{ jobs.build_n_push.outputs.image_tag }}
      latest_tag:
        description: Latest tag created on the workflow
        value: ${{ jobs.build_n_push.outputs.latest_tag }}
    secrets:
      AWS_ACCESS_KEY_ID:
        required: true
      AWS_SECRET_ACCESS_KEY:
        required: true

jobs:

As you can see, adding just a few things turns it into a reusable workflow. We have the following parameters:
- on.workflow_call: this tells GitHub that this workflow will be triggered by a call from another workflow.
- inputs: non-sensitive data that we can pass to our workflow; in this case we are passing the ECR repository and the AWS region.
- outputs: data that our workflow returns to other jobs in the caller workflow. In this example we output the image tag and the latest tag for that image (we can chain these outputs into our deploy workflow 😉; see the sketch at the end of this post).
- secrets: sensitive data that will be used by our workflow.

After these parameters, all we have to do is define the workflow like a normal one, specifying all the jobs that will run along with their steps.

But… how can I use this workflow?

To make use of our new reusable workflow you can keep it in the same repo as your application, or, a better solution, keep it in a public repository of your organization. This makes your workflow publicly accessible, unless you use a trick I will show you at the end 🤫. We have two ways to invoke the workflow. The first one, if it lives in the same repo:

name: Continuous Deployment

on:
  push:
    branches:
      - main
      - develop

jobs:
  build_and_push_image:
    name: Build container image
    uses: ./.github/workflows/docker_build_and_push.yaml
    with:
      AWS_REGION: us-east-1
      ECR_REPO: my-repo
    secrets:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

And now, if we keep the workflow in our public repo:

name: Continuous Deployment

on:
  push:
    branches:
      - main
      - develop

jobs:
  build_and_push_image:
    name: Build container image
    uses: our-org/our-workflow-repo/.github/workflows/docker_build_and_push.yaml@master
    with:
      AWS_REGION: us-east-1
      ECR_REPO: my-repo
    secrets: inherit

About the inherit keyword: you can use it when both repos have access to the same secrets with the same names in each repo (for example, organization secrets).

BONUS TRACK: private reusable workflows

Now… maybe you want to keep this workflow in a public repository so you can call it from all your application repositories, but you may not like that everyone else can call it too. Well, there is a simple trick to ensure only your organization can call your workflow; you can use the following template:

jobs:
  check_org:
    name: Check Caller
    runs-on: ubuntu-latest
    steps:
      - name: Check the calling organization
        if: ${{ github.repository_owner != 'my-org' }}
        uses: actions/github-script@v3
        with:
          script: |
            core.setFailed('This reusable workflow can only be used by My Org.')

  my_normal_job:
    needs: check_org
    name: This is my normal job
    runs-on: ubuntu-latest
    steps:
      #....
As you can see, we are using a GitHub context variable called repository_owner; this ensures that if the calling repo doesn't belong to our organization, it can't use the workflow (at least not from our public repo). Remember to add needs: check_org to your actual job.

Juan Wiggenhauser
DevOps
Teracloud
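As a follow-up to the outputs mentioned above, here is a minimal sketch of how a deploy job in the caller workflow could consume the image tag produced by the reusable workflow. The deploy job name and its command are hypothetical; the point is only the needs.<job>.outputs.<name> syntax.

jobs:
  # ...build_and_push_image as shown above...
  deploy:
    name: Deploy the new image
    needs: build_and_push_image            # the job that calls the reusable workflow
    runs-on: ubuntu-latest
    steps:
      - name: Use the image tag produced by the reusable workflow
        run: |
          echo "Deploying image tag ${{ needs.build_and_push_image.outputs.image_tag }}"
          # your actual deployment tooling would consume the tag here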
- Refrigeration Management Software that grows in the cloud
How can retailers improve their refrigeration management software to reduce energy and maintenance costs, avoid service calls, reduce refrigerant leak rates, and more? Easy: with Axiom Cloud. And how can a cloud software provider like Axiom Cloud improve and scale its service? Even easier: with Teracloud's help.

Axiom Cloud is a seed-stage company that has been in the grocery retail market for over two years. They are a software-based company that provides services to B2B and SaaS retail customers to improve their grocery refrigeration management software. From the beginning they were conceived as a cloud company, taking advantage of the great benefits of the technology. In this interview, its CTO and Co-founder, Nikhil Saralkar, tells us about their journey and how Teracloud helped them improve their backend to scale their business.

At first, their prototype "did what it needed to do", Nikhil tells us, but they knew the company was a growing business, so they needed to get it right to be able to scale and be competitive. "Our initial take served a different use case. It was much more of a traditional IoT. We were expecting a bunch of both cloud and edge-style operations. The truth is that it turned out we were really providing straight-up cloud-hosted software for customers. There was no edge component. So some of the things we were building were really irrelevant," says Nikhil.

As they tackled this challenge, there were a number of things they needed to do and improve: for example, setting up the environments the way they should have been according to best practices, the software itself, and the Infrastructure as Code, among others. Long story short, Axiom Cloud was using a third party to build a new content structure for them, but the output was not what they expected. In addition, they were struggling with DevOps in general… until Teracloud came onto the scene through a referral from a senior cloud and software advisor.

What were your expectations when you first contacted Teracloud?

At this point, I had worked with a number of contractors, be it remote individuals or agencies. Working with contractors is difficult. So I tried to be open-minded because of the referral, but I was thinking "it is not going to surprise me, it's gonna be similar to other experiences, I will talk with someone who's going to give a sales pitch…" At first I spoke to Alejandro Pozzi, CEO and Co-founder of Teracloud, and I was surprised that someone so high up in the company talked to me, and impressed by his level of technical ability. Another thing that surprised me was the personability of the Customer Success Manager, Carolina, and the DevOps engineer, Leandro. Both were very accommodating and willing to work with us. They actually listened to what our problems were and came up with good solutions, as opposed to what we've experienced in the past, where providers tried to get the business and then implement whatever solution was available. I was pleasantly surprised, so we moved on.

What can you say about your experience working with us?

We are happy with the work Teracloud is doing. They listen to us; they really hear what the problems are. The engineers come up with good solutions. I think they are technical, accessible, and very communicative over Slack. They are very flexible as a team. I definitely appreciate that.

What can you say about the technical solutions provided by Teracloud?

They proposed a new architecture for accessing our services, and VPN strategies.
Another big thing is that they've been refactoring our infrastructure code. From the beginning there was a lot of unnecessary code and services. We built our platform with the intention of doing it the right way, but I think we overbuilt it. So, at first, we refactored it into much cleaner Terraform and tried to eliminate everything that was not necessary. They've been doing all kinds of work for us on the backend since then.

Would you recommend Teracloud?

They are very good at functioning as an extension of our DevOps team. Based on my experience, if someone is looking to augment their team with really solid DevOps engineers and managers who listen to the customer and work within your sprint with you and your team… then yeah, I recommend it.

How do you imagine your company and cloud computing five years from now?

The highest likelihood is that there's an acquisition of the company. I think it will be the main provider of this kind of service for grocery retail. And if we can break through to building heating and cooling, we can also provide more services to a bigger market. In general, judging by the last five years of development, there are gonna be incremental advances in technology everywhere. The stuff that we're using today is not necessarily gonna be in the same form, but I think the way people do business is not gonna be so different from what we see today. Through cloud providers, offering up tools, using computer languages… for the most part, the world is going to be similar when it comes to mainstream stuff. The big wild card, I think, is what's gonna happen with Web3 and how that's going to change the landscape, but I see that in ten years. The other thing I see is a massive operation of Internet-connected devices and how we manage operations and security. So, Teracloud will have plenty of business. =)

Anything you want to add?

I'd like to pass along some customer feedback. I like working with Carolina and Leandro. I appreciate them as people who really care about their customers. Carolina definitely follows up on a lot of things and Leandro is a great engineer. I enjoy working with them. I hope they do very well; they're great assets to the team.

Great work, guys! And thanks, Nikhil, for your time and for sharing your thoughts and experience with us!

Raúl Verde Paz
Marketing Consultant
Teracloud
- GitHub Actions without AWS credentials
Use GitHub Actions without sharing AWS credentials as secrets

Many times, when we need to connect to AWS through GitHub Actions, the first thing that comes to mind is to take the access credentials of an IAM user we have created and use them as environment variables in our workflow file in order to authenticate with AWS. But this method is not the most secure, as we have to hand over our AWS credentials. Luckily for us, there is another method we can use to authenticate to AWS through GitHub Actions without using our credentials: we can assume an IAM role. Let's get started!

Identity provider

First, add GitHub as an OpenID Connect provider under IAM Identity providers. This connects AWS and GitHub so they can exchange tokens.

Provider URL: https://token.actions.githubusercontent.com
Audience: sts.amazonaws.com

IAM role

Instead of a user, you have to create a role with a trust relationship: a relationship between the role and the GitHub identity provider you just added. Press Next to add permissions. Once the IAM role is created, the last step is changing the trust relationship condition to restrict usage of the role. Click on "Edit trust relationship" to start editing, and add:

..
"Condition": {
  "StringLike": {
    "token.actions.githubusercontent.com:sub": "repo:organization/repository:*"
  }
}

Update the workflow file

Finally, we can update the workflow file. This is a simplified version:

name: Deploy

on:
  push:

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-18.04
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::111111111111:role/deploy-xyz.tech
          aws-region: eu-central-1
      - name: Build

What changed? The secrets containing AWS credentials have been removed:

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

And we replaced them with this:

permissions:
  id-token: write
  contents: read
steps:
  - uses: actions/checkout@v2
  - name: Configure AWS Credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      role-to-assume: arn:aws:iam::111111111111:role/deploy-xyz.tech
      aws-region: eu-central-1

Using Terraform

The configuration via Terraform is quite simple and does not require any advanced knowledge. Let's start by creating our OpenID Connect provider:

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

Both the url and the client_id_list values are provided by GitHub in their documentation, which you can read here. The thumbprint_list is generated from the SSL certificate of GitHub's OpenID endpoint. This value is derived from the url, so it is static and you can just copy and paste it without any hassle.
The next step is to create our policy document, granting our repositories permission to assume the role:

data "aws_iam_policy_document" "github_actions_assume_role" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values = [
        "repo:xyz1/*:*",
        "repo:xyz2/*:*"
      ]
    }
  }
}

In the example above, any repository from xyz1 or xyz2 has the sts:AssumeRoleWithWebIdentity permission and can therefore assume the role we will create next. We now need to create that role and associate it with the policy document we just created:

resource "aws_iam_role" "github_actions" {
  name               = "github-actions"
  assume_role_policy = data.aws_iam_policy_document.github_actions_assume_role.json
}

After that, another policy document must be created, but this time it contains the permissions for GitHub Actions itself. We grant permission to perform some ECR operations on AWS, with the only rule that our registry must have the tag permit-github-action=true:

data "aws_iam_policy_document" "github_actions" {
  statement {
    actions = [
      "ecr:BatchGetImage",
      "ecr:BatchCheckLayerAvailability",
      "ecr:CompleteLayerUpload",
      "ecr:GetDownloadUrlForLayer",
      "ecr:InitiateLayerUpload",
      "ecr:PutImage",
      "ecr:UploadLayerPart",
    ]
    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/permit-github-action"
      values   = ["true"]
    }
  }
}

Notice that the example above uses ECR, but nothing stops you from granting permissions to other AWS services. Finally, we need to create the policy based on the policy document above and attach it to the role:

resource "aws_iam_policy" "github_actions" {
  name        = "github-actions"
  description = "Grant Github Actions the ability to push to ECR"
  policy      = data.aws_iam_policy_document.github_actions.json
}

resource "aws_iam_role_policy_attachment" "github_actions" {
  role       = aws_iam_role.github_actions.name
  policy_arn = aws_iam_policy.github_actions.arn
}

As the last step on the Terraform side, we need to create our registry and add the tag to it:

resource "aws_ecr_repository" "repo" {
  name                 = "xyz/repo"
  image_tag_mutability = "IMMUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }

  tags = {
    "permit-github-action" = true
  }
}

Finally, we update the workflow file the same way we did previously in the steps without Terraform. All done! And that's it. Now you can remove the IAM user, or disable its access key first, just to be sure (see the CLI sketch at the end of this tip). Thanks for reading!

Julian Catellani
DevOps Engineer
Teracloud
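If you prefer to disable the old credentials before deleting the user, a couple of illustrative AWS CLI calls can do it; the user name and access key ID below are placeholders.

# Placeholders: replace the user name and access key ID with your own values.
aws iam update-access-key --user-name github-actions-user --access-key-id AKIAXXXXXXXXXXXXXXXX --status Inactive

# Once everything works with the assumed role, remove the key (and, after detaching its policies, the user):
aws iam delete-access-key --user-name github-actions-user --access-key-id AKIAXXXXXXXXXXXXXXXX
aws iam delete-user --user-name github-actions-user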
- What is it like to work at Teracloud? A journey to the cloud and beyond! 🚀
For many years I was searching for my dream job. In my "short" 30 years I went through different jobs, in different industries and companies, with the most varied teams. One day, in the middle of the pandemic, without really meaning to, I came across Teracloud. Little by little I started to dive into the world of cloud computing and the enormity of AWS. I began by getting to know the DevOps culture. I met people with tremendous experience and, above all, with a desire to grow and a lot of passion for what they do. I found a team with strong technical skills but also with an incredible capacity to outdo itself day after day. There is not a single day in which this huge family is not learning something new. There is not a single day in which someone on the team doesn't make you smile. And, more importantly, there is not a single minute in which we are not having fun.

The best part? When we are all together (remotely or in person in Córdoba Capiiiital). And here I'll take the chance to tell you about some of our most special events, the ones that make up our cultural DNA.

The onboarding week: that first week, which we organize in Córdoba, at our offices. The most important goal? A good dose of Teracloud culture right at the start of the journey! Seeing the shy, expectant new faces when they arrive, and comparing them with the faces at the end, when they don't want to leave and beg for the work week to have more than five days, is priceless!

The OWN IT: it is not just another meeting, it is THE meeting. A monthly space shared by all the teams. We always kick off with a high-energy activity, and then each team presents the value it contributed to the company during that month.

The virtual birthday celebrations: can you imagine seeing all your teammates dressed up as Disney characters? That happened at my last birthday celebration! We always look for something the birthday person likes and identifies with, and from there we go all in: costumes, camera effects, HAND-made signs, music!

On a normal day at Teracloud you can find a bit of everything; you will never get bored, and you will always be enjoying yourself and learning. For me, working at Teracloud means a daily challenge, but one of the good ones (not the stressful ones): the kind of challenge that invites me to outdo myself every day, to share my achievements with the rest of the team, and that invites all of us to work every day to keep growing as the happiest place to work in Córdoba, in Argentina and, why not, in the whole world.

So if you ask me today whether I found my dream job, I tell you yes! I tell you I found that and much more! We have endless opportunities and daily challenges, so if you want to live the experience first-hand, I invite you to join us and go together to the cloud and beyond! 🚀

Florencia Sánchez
Talents Manager
Teracloud
- How to create SSL certificates from a third party, import them into AWS, and not fail in the process
During the last days of the past year I received a request to update SSL certificates, probably, like most of you. As we know, sometimes this is automated via AWS, but on other occasions it is necessary to get your hands dirty ;-) This time I first needed to create a CSR (Certificate Signing Request) with a private key to provide to the client, so they could request the new certificate from a third party. I did this by running the following command (the .csr and .key file names are placeholders; use your own domain name):

openssl req -new -out your-new-domain.csr -newkey rsa:2048 -nodes -sha256 -keyout your-new-domain.key -config citi.conf

Notice that I passed the command a conf file where I set all the details:

[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
C = US
ST = New York
L = Rochester
O = End Point
OU = Testing Domain
emailAddress = your-administrative-address@your-awesome-existing-domain.com
CN = www.your-new-domain.com

[ req_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = your-new-domain.com
DNS.2 = www.your-new-domain.com

Check that your Certificate Signing Request (CSR) contains the correct information by running:

openssl req -in CSR.csr -noout -text

To confirm it was created for all the domains you need:

openssl req -in CSR.csr -noout -text | grep DNS

A new certificate will be generated from the CSR, and you can either copy it from the provider's site or download it. If you choose to download it, be sure to select "Individual .crts (zipped)" as the file type (depending on the provider, some steps may differ).

The last step is to import the certificate into AWS Certificate Manager, where you will find the following fields: in Certificate body you must paste the first individual certificate, the one named after your domain. Certificate private key is the key you generated when creating the CSR. Finally, Certificate chain is the intermediate certificate available on the DigiCert page; check that its name includes "CA". After filling in the three fields and clicking Next, you can add a tag to identify your new certificate, and that's it! You're ready to use it on your load balancer, CloudFront distribution, etc. (If you prefer the CLI, a sketch of the import command follows at the end of this tip.)

Lourdes Dorado
DevOps Engineer
Teracloud
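For those who prefer the command line over the console, the same three pieces can be imported into ACM with the AWS CLI. The file names below are placeholders for the files described above.

# Placeholders: certificate body, private key, and intermediate (CA) chain files.
aws acm import-certificate \
  --certificate fileb://your-new-domain.crt \
  --private-key fileb://your-new-domain.key \
  --certificate-chain fileb://intermediate-ca.crt \
  --region us-east-1
# Note: us-east-1 is required if the certificate will be used with CloudFront.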
- Growth to Success: Building a Gazelle
Gazelles are agile, small, and fast, and on top of that they have excellent vision. The simile is well chosen, because gazelle companies show constant, fast, above-average growth, both in revenue and in job creation. If a business grows at or faster than its market, it has one of the characteristics of a gazelle company.

One of the biggest challenges for companies today is to grow, since the companies that are able to grow exponentially are the ones most likely to remain in the market, receive investment, and make the leap into larger, more profitable businesses, becoming gazelle companies. But this is not an easy task.

At Teracloud we pride ourselves on being creative when proposing solutions and on always aiming to give our clients a good experience. We are convinced that through clarity, planning, and innovation we are building a growth path full of opportunities. We started in 2018 with the idea of offering high-quality DevOps services and a great passion for cloud technology, and today we have become a company that has internationalized its services and adapted to every change we have faced. We are committed to education and to the democratization of technology within the IT community.

Since 2018, Teracloud has been growing year after year in a persistent way. In 2020, COVID-19 hit the startup ecosystem hard, so our growth was slower than in previous years, but we never stopped. In 2021 we returned to our growth path and defined a plan targeting 50% growth for the following three years. The planned growth is based on strategic company decisions, but mainly on our marketing and commercial approach. In this way, and precisely because of innovation and disruption, startups like us seek to make great strides in the market and become business models with high growth potential.

Actually, growing a company requires different skills than creating it, which is why we want to share some of the growth tips that have been working for us:

- Focus on the customer, not the competition.
- Build an excellent community with reciprocity of knowledge and spaces for learning.
- Motivate your team.
- Keep your lead funnel active, always keeping your target audience in mind.
- Don't focus so much on size; focus on developing the ability to successfully face and respond to your environment.
- Train your team constantly; nothing beats putting the acquired knowledge into practice.
- Delegate functions.
- Measure results to make decisions.

In other words, preparing your business for successful scaling from the start can help ensure that your company joins the ranks of the gazelles or unicorns. Something we can't forget is that workers have a great influence: without a cohesive team and good leadership, it is difficult for a company to get ahead.

Focus, focus, and focus… and if there are doubts, more focus

Companies that enter rapid growth processes are very focused on market niches or on certain geographical areas, and they have very close and strong relationships with their clients. On top of that, they have a very strong degree of focus in their strategy, which is how they sustain their competitive advantage. And all this is possible because they are led by a high-performance team whose members' aspirations are strongly aligned.
They have the capabilities to take risks and to manage the critical success factors of the business, because they know them. They all row in the same direction.

Liliana Medina
Growth Manager
Teracloud
- Document Design Tips for non-designers
We live in a digital era that puts design tools everywhere, whenever we want them. Without going any deeper into the discipline of design, the fact that we can open a Word document or Google Docs and, with a simple click, make the typography heavier (bold), or even swap it for a new one, would have been a dream for the old typographers of the fifteenth century. And that is something you should take advantage of. We are here to talk about some tips that will probably make anyone who reads your documents grateful to you. Although the world of design is big enough to get lost in explanations and details (and believe me, design is built on detail), we can establish some "rules" that will help you organize your creations and make them more likeable. Let's get started.

1) You don't have to be an expert to use contrast to your advantage.

Has it ever happened to you that you just couldn't read something? Or could, but only with great difficulty? Reading has to be an almost involuntary act, so anything that makes you notice that you are actually reading is a sign that something in the composition is going wrong. Let's check a practical example. Did you see how it changes? You can follow a fairly simple rule: Dark + Dark = NO. Light + Light = DEFINITELY NOT. Light + Dark = YES, ABSOLUTELY (and the same the other way around). Going with complementary colors can also help you a lot (the ones facing each other on the color wheel). Guided by these two principles, you can play with your contrasts so that your readers really read what you are saying. If you have a long text, it is always better to use light backgrounds: this makes reading easier and prevents the eye from getting tired. That does not mean we cannot use dark backgrounds, but they are better suited to headlines or shorter texts.

2) If you justify… better leave it ragged.

With this section we are going to learn a new concept: the "saccadic movement". In the reading process, our eyes pay attention to the words they are reading while "jumping" ahead to get an idea of what is coming next. In simpler words: you haven't finished reading a word and your brain is already reading the next one. A fully justified alignment can be a bit laborious for your brain in this sense: paragraphs set this way usually end up with uneven gaps between words that don't benefit your "brain reading". Let's check this in an example: those spaces between words make our brain feel a little confused. Aligning your text to the left, leaving a ragged right edge, saves your text from that and lets your brain do a much more relaxed reading.

3) Sometimes, size does matter.

Maybe it's time to start looking twice at whether what we are composing is actually readable. If we use a typographic size that is too small, the text will not be readable (obviously), but if we use a very large typographic body, the effect is similar. What we can do is use specific type sizes according to the purpose. For example, for reading text in PDF or other on-screen formats it is recommended to use between 11 and 13 pt (points, the typographic size unit), while in printed text the ideal is to set the reading text between 8 and 10 pt. Pay attention to what your text will be used for, in which format it will be read, and by whom (if the reader is older, for example, it may require a slightly larger body size).
This will let you better define which body size works best for your document so it is easier to read. Whoever reads that document will surely thank you later.

4) Hierarchize everywhere.

Is this body text or a title? Part of the paragraph or a quote? Let your document speak before it is read. Think of it like a family tree: the title is the grandparent (the largest size); do you have subtitles? Great, let them be the parents (intermediate size); the body text can be the children (a smaller size). Do you have direct quotes? The cousins! Differentiate them in some way (it can be through typographic variables: italic, light, etc.). Each part of your text has its role and, like any role within a set, it must be possible to delimit and differentiate it. Give the reader a hint of what to expect as soon as they open the document. For you DevOps folks: is there code involved in a piece of documentation? You already know what to do.

5) Value the blank spaces.

There is a very popular phrase (more than a phrase, a commandment) in design: "less is more". When putting together a presentation or a PDF, it is important to respect the space around each element. Let's see this in an example: overlapping a text with some brightly colored shape, or surrounding it with other elements (worse still if those elements have different colors), will make it difficult to read. On the contrary, if we let the elements breathe within the page, we achieve a much more enjoyable reading experience, and the task of hierarchizing them becomes much easier. Keep it simple, and make the reading job simple as well.

6) Be a copycat.

Pay attention to documents you have read and slides that have been shown to you. Absorb everything you remember seeing and thinking: "I read this easily, this looks very good." The brain always tends to simplify things and run away from those that are not easy to decipher or understand, so when we find ourselves in front of something well designed, we recognize it automatically (and the same happens the other way around). "Copying" is not a bad word; you got to where you are today by copying: you learned to speak by copying the adults responsible for you, you learned to write by copying from a blackboard or a sheet of paper. Everything you know today you acquired this way; it is by copying that you incorporate new knowledge, and only after incorporating it can you adapt it to your own form and style.

And that's it! You now have everything you need to transform your documents and presentations and be the best host for your audience. And remember: you don't have to be a designer to make your work look better. If you want to know more about typography and document design, I suggest you check out this amazing book: Inside Paragraphs: Typographic Fundamentals by Cyrus Highsmith.

Victoria Giménez
Community Manager
Teracloud
- Data encryption at rest
One of the most important parts of any architecture is data protection, and encryption, when used correctly, provides an additional layer of protection. Following the recommendations of the Security pillar of the Well-Architected Framework, we should encrypt data at rest, rendering it unintelligible to unauthorized access. For this, AWS KMS helps you manage encryption keys and integrates with many AWS services, like S3 or EBS. In these cases you can apply server-side encryption in two ways.

By console: go to the EC2 console -> EBS Encryption and choose Manage. On S3, select the bucket, go to Properties, select Edit on the default encryption configuration, and enable server-side encryption. At this point you can choose an AWS managed key or create another KMS key. You can also enable an S3 Bucket Key to reduce calls to KMS and, with them, KMS costs.

If you want to apply it with IaC (with Terraform in this case):

resource "aws_s3_bucket" "MyBucket" {
  bucket = "my-bucket-name"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.MyKMSKey.arn
      }
    }
  }
}

resource "aws_ebs_encryption_by_default" "MyVolume" {
  enabled = true
}

(A sketch of the KMS key referenced above is included at the end of this tip.) If you find this interesting and want to go deeper into the subject, you can read Ken Beer (General Manager, AWS KMS) on the importance of encryption in this blog.

Ezequiel Domenech
DevOps Engineer
Teracloud
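For completeness, here is a minimal sketch of the customer-managed KMS key that the S3 example above references. The description, rotation setting, and deletion window are illustrative assumptions, not part of the original tip:

resource "aws_kms_key" "MyKMSKey" {
  description             = "Key used for S3 server-side encryption" # illustrative
  enable_key_rotation     = true # rotate the key material automatically
  deletion_window_in_days = 30   # waiting period before the key can be deleted
}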
- Reduce your CloudWatch Costs in On-premises Environments
One of the most important services that AWS provides is CloudWatch. This service allows you to monitor your resources and stay alert to possible failures. You can use CloudWatch to collect metrics (like CPU usage, disk usage, memory, etc.) directly from your resources and send them to AWS, so you can determine the current performance and status of your resources. Among these resources there may be devices and peripherals from which it isn't necessary to collect metrics. For example: if you have an on-premises server running container services, you may only need to collect metrics from the devices that belong to the physical server and not from the containers, such as the CPU or disk usage of a single Docker container.

But why am I telling you this? Because once the CloudWatch agent is installed, it starts collecting metrics from all the resources by default, and this may turn out to be expensive. A solution is to modify the agent configuration file. So here is a tip for you: if you are running the CloudWatch agent on a Linux server, the configuration file is located in the following path:

/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/

You can edit it with any editor, such as vi, vim, or nano. The configuration has a "metrics" section, and inside it a "resources" attribute where we indicate which resources we want to collect data from. By default it has the "*" value, which includes all your resources. Let's say you want to collect disk metrics only from your root partition ( / ): set the "resources" list of the disk plugin to ["/"] instead of "*" (a minimal sketch of this section is included at the end of this tip). And that's it, as simple as that! This way you can avoid a potentially expensive CloudWatch bill. Don't forget to restart the CloudWatch agent once you modify the file.

For more information about how to install the Amazon CloudWatch agent, please check: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-commandline-fleet.html

Rodrigo González
DevOps Engineer
Teracloud
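To make the tip above concrete, here is a minimal sketch of the "metrics" section of the amazon-cloudwatch-agent JSON configuration, collecting disk metrics only from the root partition. The measurements and the collection interval are illustrative; adjust them to your needs:

{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"],
        "metrics_collection_interval": 60
      },
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}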
- Teratip: How to monitor an ECS task running on Fargate with Datadog
Let's say you already have Datadog configured to monitor your workloads in AWS and you want to get more insights from some ECS tasks running on Fargate. In order to do that, you will need to add the Datadog Agent to your task as a sidecar container, i.e., an additional container that runs alongside the application container. Below is an example of the container definitions block of an ECS task definition. The first container is a custom application and the second one is the Datadog Agent:

[
  {
    "name": "post-migrations-production",
    "cpu": ${cpu_units},
    "memory": ${max_memory},
    "memoryReservation": ${min_memory},
    "image": "${ecr_repo}",
    "essential": true,
    "environment": [
      {
        "name": "DD_SERVICE_NAME",
        "value": "post-migrations"
      }
    ]
  },
  {
    "name": "datadog-agent",
    "image": "public.ecr.aws/datadog/agent:latest",
    "environment": [
      {
        "name": "ECS_FARGATE",
        "value": "true"
      },
      {
        "name": "DD_API_KEY",
        "value": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
      }
    ]
  }
]

To enable monitoring on Fargate, you have to set two environment variables: ECS_FARGATE to true and DD_API_KEY with your Datadog API key. This way, the next time the task runs, the CPU, memory, disk, and network usage of your ECS Fargate cluster will be monitored in Datadog.

Collecting traces and APM data

Now, if you want to collect traces and APM data from your application, you will have to allow the Datadog Agent to communicate on the container's port 8126 and add the DD_APM_ENABLED and DD_APM_NON_LOCAL_TRAFFIC environment variables to the Agent container definition as well:

"containerDefinitions": [
  {
    "name": "datadog-agent",
    "image": "public.ecr.aws/datadog/agent:latest",
    "portMappings": [
      {
        "hostPort": 8126,
        "protocol": "tcp",
        "containerPort": 8126
      }
    ],
    "environment": [
      {
        "name": "ECS_FARGATE",
        "value": "true"
      },
      {
        "name": "DD_API_KEY",
        "value": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
      },
      {
        "name": "DD_APM_ENABLED",
        "value": "true"
      },
      {
        "name": "DD_APM_NON_LOCAL_TRAFFIC",
        "value": "true"
      }
    ]
  }
]

Visualizing APM data

Datadog uses flame graphs to display distributed traces, meaning it shows all the service calls that make up a single request.

Final words

By configuring the Datadog Agent as a sidecar container in the ECS task definition of your application running on Fargate, you can collect a lot of metrics, traces, and APM data that will help you when troubleshooting. For more information, visit https://www.datadoghq.com/blog/aws-fargate-monitoring-with-datadog/

Lucas Valor
DevOps Engineer
Teracloud.io
- K8s Cluster Auto-scalers: Autoscaler vs Karpenter
Autoscaling in a nutshell

When we work with workloads that dynamically demand more or fewer resources in terms of CPU or memory, we need solutions that allow us to deploy and fit these workloads in production. In this post we will talk about a few concepts, starting with autoscaling: "Autoscaling is a method used in cloud computing that dynamically adjusts the number of computational resources in a server farm - typically measured by the number of active servers - automatically based on the load on the farm."

Good, now we know what autoscaling is and when it is used. If you have an e-commerce app, you probably need autoscaling several times a year; one example is Amazon Prime Day, when the traffic hitting your servers may spike for a few hours.

K8s Autoscaling

Kubernetes is one of the container orchestration platforms with the most automation capabilities. Kubernetes autoscaling helps optimize resource usage and costs by automatically scaling a cluster up and down in line with demand. Kubernetes enables autoscaling at the cluster/node level as well as at the pod level, two different but fundamentally connected layers of the Kubernetes architecture.

K8s Autoscaler (Native)

Cluster Autoscaler is a Kubernetes-native tool that increases or decreases the size of a Kubernetes cluster (by adding or removing nodes), based on the presence of pending pods and node utilization metrics. Its functions are:
- Add nodes to a cluster whenever it detects pending pods that could not be scheduled due to resource shortages.
- Remove nodes from a cluster whenever the utilization of a node falls below a certain threshold defined by the cluster administrator.

K8s Cluster Autoscaler issues

Cluster Autoscaler only functions correctly with Kubernetes node groups/instance groups whose nodes have the same capacity. For public cloud providers like AWS, this might not be optimal, since diversification and availability considerations dictate the use of multiple instance types. When a new pod is scheduled whose needs differ from the node groups already configured, it is necessary to create a new node group, tell Autoscaler about it, define how to scale it, and set some weights on it. We also have no control over the zones where a node will be created.

Karpenter

Karpenter is an open-source node provisioning project built for Kubernetes. The project was started by AWS and is currently supported only and exclusively there, although Karpenter is designed to work with other cloud providers. Unlike Autoscaler, Karpenter doesn't have node groups; it talks directly to EC2 and "puts things" directly in the zone we want. We can just say "hey EC2, give me that instance in that zone" and that's all.

Advantages
- VMs based on workloads: Karpenter can pick the right instance type automatically.
- Flexibility: it can work with different instance types, different zones, and quite a few other parameters.
- Group-less node provisioning: Karpenter works directly with VMs, which speeds things up drastically. Pods are bound to nodes before the nodes are even created, so everything happens faster.
- Multiple provisioners: we can have any number of provisioners (with Autoscaler there is only one configuration), and Karpenter picks the provisioner that matches the requirements.

Step by step

The first thing we need to do is create the cluster itself.
Next, we have to create a VPC, or use one already created; we then need to add additional tags to the subnets so that Karpenter knows which cluster they belong to (if you use Terraform, you can add these tags in the VPC resource). Next, we have to create the IAM roles for the Karpenter controllers. Once IAM is ready, we can deploy Karpenter through Helm. Finally, we can deploy our provisioners and feel the Karpenter power. You can choose your preferred deployment method; in this case we will use Terraform, but you can also use CloudFormation.

PoC

Preparing the environment

Karpenter is easy to deploy, but it is necessary to prepare the whole environment beforehand (IAM node role, IAM controller role, IAM spot role, etc.). This process can be a bit tedious due to AWS security requirements. We are going to deploy Karpenter in a test environment; first, we need to set some environment variables and deploy eks.tf and vpc.tf (we pick us-east-1 as the region):

export CLUSTER_NAME="${USER}-karpenter-demo"
export AWS_DEFAULT_REGION="us-east-1"
terraform init
terraform apply -var "cluster_name=${CLUSTER_NAME}"

The deployment will fail because we haven't set our kubeconfig file. We can run the commands below to redeploy the config maps correctly:

export KUBECONFIG="${PWD}/kubeconfig_${CLUSTER_NAME}"
export KUBE_CONFIG_PATH="${KUBECONFIG}"
terraform apply -var "cluster_name=${CLUSTER_NAME}"

Because we are going to use Spot instances, it is necessary to add an extra service-linked IAM role:

aws iam create-service-linked-role --aws-service-name spot.amazonaws.com

Next, we have to deploy the Karpenter IAM roles (kcontroller.tf and knode.tf) and deploy Karpenter as a Helm package through Terraform (karpenter.tf). We need to run "terraform init" again because we are going to use a new module:

terraform init
terraform apply -var "cluster_name=${CLUSTER_NAME}"

Feel the Karpenter power

In this case we have an EKS cluster with one t3a.medium (2 vCPU and 4 GiB) as the default node, and an application deployment with 5 replicas of 1 vCPU and 1 GiB each, so our node will not be able to schedule them; Karpenter will notice the failed scheduling and take care of fixing it. First, we inspect the Karpenter namespace and its resources (one service and one deployment with one replica set that runs one pod). Then we deploy the application YAML file and see that our node can't run it. Next, we deploy the provisioner YAML file (a minimal sketch is included after the conclusion below) and watch Karpenter automatically scale up a new node (c6i.2xlarge) able to run the deployment. Finally, we destroy the application deployment and see how Karpenter automatically removes the nodes after they have been empty for 30 seconds.

Conclusion

Karpenter is an excellent replacement for the native Autoscaler. Autoscaler isn't a final solution but rather a blueprint, and now we have a really good alternative, or, to be more precise, some of us do, because it is only supported on AWS; the closest thing to Karpenter elsewhere would be GKE Autopilot on GCP, with some differences. Back to Karpenter: it is smarter and more flexible, and it should be the next tool you try if you currently work with autoscaling in EKS.
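For reference, here is a minimal sketch of the kind of provisioner manifest used in the PoC above, following the karpenter.sh/v1alpha5 API of the Karpenter version referenced below. The discovery tag value and the CPU limit are illustrative assumptions:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Let Karpenter choose between Spot and On-Demand capacity
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
  limits:
    resources:
      cpu: "100"                                  # illustrative upper bound for provisioned capacity
  provider:
    subnetSelector:
      karpenter.sh/discovery: my-karpenter-demo   # placeholder: your cluster's discovery tag
    securityGroupSelector:
      karpenter.sh/discovery: my-karpenter-demo
  ttlSecondsAfterEmpty: 30                        # matches the 30-second scale-down seen in the PoC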
References

Karpenter official documentation: https://karpenter.sh/v0.6.4/
Terraform AWS provider resources: https://registry.terraform.io/providers/hashicorp/aws/latest/docs
DevOps Toolkit channel: https://www.youtube.com/c/DevOpsToolkit
Terraform PoC repository: https://github.com/teracloud-io/karpenter-blog

Nicolas Balmaceda
DevOps Engineer
teracloud.io