Blog | Teracloud

  • Velero for Disaster Recovery in EKS Cluster

Introduction
Velero is a robust tool for Kubernetes disaster recovery, enabling users to back up, migrate, and restore applications and persistent volumes. This post provides guidance on using Velero as a disaster recovery strategy within an Amazon EKS cluster.
Objectives
The primary objectives of implementing Velero for disaster recovery are as follows:
Efficient Backup Strategies: Leverage Velero to create periodic backups of your EKS cluster resources, ensuring minimal data loss in case of a disaster.
Automated Scheduling: Utilize Velero schedules to automate the backup process, reducing manual intervention and ensuring regular snapshots.
Seamless Restore Operations: Develop clear restore strategies using Velero manifests, allowing for a quick and efficient recovery process.
Considerations
Backup Frequency: Determine an appropriate backup frequency based on the criticality of your applications and data.
Retention Policies: Define retention policies for your backups to manage storage costs effectively.
Backup and restore workflow
Velero consists of two components: a Velero server pod that runs in your Amazon EKS cluster, and a command-line client (Velero CLI) that runs locally.
Whenever we issue a backup against an Amazon EKS cluster, Velero performs a backup of cluster resources in the following way:
The Velero CLI makes a call to the Kubernetes API server to create a backup CRD object.
The backup controller: checks the scope of the backup CRD object (namely, whether we set filters), queries the API server for the resources that need a backup, compresses the retrieved Kubernetes objects into a .tar file, and saves it in Amazon S3.
Similarly, whenever we issue a restore operation:
The Velero CLI makes a call to the Kubernetes API server to create a restore CRD that will restore from an existing backup.
The restore controller: validates the restore CRD object, makes a call to Amazon S3 to retrieve the backup files, and initiates the restore operation.
Velero also performs backup and restore of any persistent volume in scope: if you are using Amazon Elastic Block Store (Amazon EBS), Velero will create Amazon EBS snapshots of the persistent volumes in scope. For any other volume type (except hostPath), use Velero’s Restic integration to take file-level backups of the contents of your volumes. At the time of writing, Restic is in Beta, and therefore not recommended for production-grade backups.
Steps
1. Velero Installation. You can follow the official guide for the complete Velero installation; it also outlines the resources you need to create before configuring Velero: https://velero.io/docs/v1.0.0/aws-config/ Alternatively, you can install Velero with Helm (https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml). Remember to create the required AWS resources before this installation.
2. Check resources creation. After the successful installation and configuration, check that all resources (IAM role, S3 bucket) were created correctly and that the Velero pod is running. The Velero CLI exposes a set of verbs (backup, restore, schedule, and so on) that you will use in the following steps.
3. Schedule Backups. Create a Velero schedule manifest (schedule.yaml) to define the backup frequency and the included namespaces.
4. Restore from Backup. In the event of a disaster, use a Velero restore manifest (restore.yaml) to initiate the recovery process. A sketch of both manifests is shown below.
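As a reference for steps 3 and 4, here is a minimal sketch of the two manifests, applied with kubectl. The names, namespace, cron expression, and TTL are illustrative assumptions; adjust them to your own backup policy.

```bash
# Sketch only: names and values are illustrative, adjust them to your cluster.
# schedule.yaml: daily backup of the "default" namespace at 02:00 UTC, retained for 7 days.
cat > schedule.yaml <<'EOF'
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"          # cron expression for the backup frequency
  template:
    includedNamespaces:
      - default                  # namespaces included in the backup
    ttl: 168h0m0s                # retention period (7 days)
EOF

# restore.yaml: restore from one of the backups produced by the schedule above.
cat > restore.yaml <<'EOF'
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-from-daily
  namespace: velero
spec:
  backupName: daily-backup-20240101020000   # hypothetical backup name, list yours with "velero backup get"
  includedNamespaces:
    - default
EOF

kubectl apply -f schedule.yaml
# Only in a disaster-recovery scenario:
kubectl apply -f restore.yaml
```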
5. Validation. Regularly validate your disaster recovery strategy by simulating restore operations in a non-production environment.
Martín Carletti Cloud Engineer Teracloud
Fabricio Blas Cloud Engineer Teracloud

  • Discover the untapped power of Generative AI Cloud with AWS

Unlocking Creative Potential with Generative AI in the Cloud
In today's rapidly evolving digital landscape, creativity thrives as a driving force behind innovation. Thanks to advancements in artificial intelligence (AI), particularly generative AI, we are witnessing a profound transformation in how we approach creative endeavors. At the forefront of this revolution stands Amazon Web Services (AWS), offering a comprehensive suite of AI-powered services that revolutionize how we think about and harness creativity in the cloud.
Generative AI: A Gateway to Boundless Creativity
Recent years have seen remarkable advancements in the field of AI, particularly in generative AI, where machines are trained to create content, images, and even entire virtual environments. Amazon Web Services (AWS) has emerged as a frontrunner, spearheading the future of generative AI within the cloud environment. With its suite of innovative services like Amazon Bedrock, Amazon SageMaker, and Amazon Q, AWS empowers businesses to harness the power of generative AI to create proprietary AI models tailored to their unique needs, such as large language models.
Amazon Bedrock: Building and Scaling Generative AI Applications with Foundation Models
At the core of AWS's AI ecosystem lies Amazon Bedrock, a fully managed service that provides access to foundation models and serves as the backbone for cutting-edge AI development. This powerful tool offers unparalleled advantages for creativity by providing a stable and reliable infrastructure for deploying and scaling AI solutions. With Amazon Bedrock, developers and organizations can leverage the power of generative AI with confidence, knowing their solutions are built on a robust and secure foundation. These robust foundations enable customers to focus more on innovation and less on infrastructure management, accelerating the pace of AI-driven creativity. Additionally, Amazon Bedrock fosters collaboration and interoperability across AWS's AI-powered services, allowing users to seamlessly integrate AI capabilities into their workflows and pave the way for business experimentation.
Amazon SageMaker: Democratizing AI Development
Central to AWS's AI offerings lies Amazon SageMaker, a fully managed service that simplifies the process of building, training, and deploying machine learning models at scale. With SageMaker, users can access a wide range of algorithms and frameworks, enabling them to experiment with generative AI capabilities without the need for specialized expertise. This democratization of AI development empowers individuals and organizations to tap into their creative potential and experiment with their own data.
Beyond Code: Empowering Creativity with Generative AI Tools
Amazon CodeWhisperer revolutionizes the coding experience by offering intelligent code generation capabilities. During a preview period, participants using CodeWhisperer experienced a 27% increase in task completion rates and completed tasks 57% faster than those without it, highlighting its potential to revolutionize coding workflows. Further expanding the boundaries of creativity, Amazon Q in QuickSight offers a transformative approach to both visualizing and analyzing data. By combining natural-language querying with generative BI authoring capabilities, analysts can create customizable visuals and refine queries effortlessly. This empowers businesses to make data-driven decisions with clarity and precision, fueling creativity in strategic planning and execution.
Healthcare Transformed: Revolutionizing Documentation with AWS HealthScribe
AWS HealthScribe, a HIPAA-eligible service, empowers healthcare software vendors to automate clinical documentation processes. By combining speech recognition and generative AI, HealthScribe analyzes patient-clinician conversations to generate accurate and easily reviewable clinical notes, reducing the burden on healthcare professionals and enhancing patient care.
Final Thoughts: Unleashing Limitless Possibilities with Generative AI
The convergence of Generative AI and cloud computing, spearheaded by Amazon Web Services (AWS), is revolutionizing creativity across diverse domains. AWS's suite of innovative AI services enables customers to leverage generative AI and its applications, democratizing AI development, enhancing developer productivity, redefining business intelligence, and revolutionizing healthcare documentation. All in all, AWS's robust foundation empowers individuals and organizations to unleash their creative potential. As we continue to harness the power of Generative AI in the cloud, the possibilities for innovation and creativity are truly limitless. Ready to unlock the power of generative AI for your projects? Our cutting-edge AI services offer unparalleled creativity and efficiency. Take the next step towards revolutionizing your workflow and achieving your goals. Contact us now to explore how our generative AI services can elevate your endeavors today.
Alan Bilsky Data Engineer Teracloud

  • Domain by GoDaddy, DNS by Route53; HOW TO Enable DNSSEC in your domains

Introduction
The Domain Name System Security Extensions (DNSSEC) is a set of specifications that extend the DNS protocol by adding cryptographic authentication for responses received from authoritative DNS servers. Its goal is to defend against techniques that hackers use to direct computers to rogue websites and servers.
DNSSEC adds two important features to the DNS protocol:
Data origin authentication allows a resolver to cryptographically verify that the data it received came from the zone where it believes the data originated.
Data integrity protection lets the resolver know that the data hasn't been modified in transit since it was originally signed by the zone owner with the zone's private key.
How do DNS resolvers know how to trust the DNSSEC keys?
A zone's public key is signed, just like the other data in the zone. However, the public key is not signed by the zone's private key, but by the parent zone's private key. Every zone's public key is signed by its parent zone, except for the root zone: it has no parent to sign its key. Therefore, the root zone's public key is an important starting point for validating DNS data. If a resolver trusts the root zone's public key, it can trust the public keys of top-level zones signed by the root's private key, such as the public key for the org zone. And because the resolver can trust the public key for the org zone, it can trust public keys signed by its respective private key, such as the public key for icann.org. (In actual practice, the parent zone doesn't sign the child zone's key directly; the actual mechanism is more complicated, but the effect is the same as if the parent had signed the child's key.) The sequence of cryptographic key signing is called a chain of trust.
How much does it cost to enable DNSSEC in AWS?
Amazon Route 53 does not charge you to enable DNSSEC signing on your public hosted zones or to enable DNSSEC validation for Amazon Route 53 Resolver. However, when you enable DNSSEC signing on your public hosted zones, you incur AWS Key Management Service (KMS) charges for storing the private key and for using the key to sign your zones. For more information about KMS charges, see the AWS KMS pricing page. Note that you can choose to use a single customer-managed AWS KMS key, stored in KMS, across multiple public hosted zones.
How do we enable DNSSEC?
Let's say our hosted zone lives in Amazon Route 53, where we host all our records, but the domain is still registered with GoDaddy. How could we enable DNSSEC in this case?
First of all, we need to take some things into consideration: DNS propagation can take anywhere from a few minutes to 24 hours, depending on the geographical location of the user, the type of DNS record being updated, and the TTL (time to live) value set for the record. During this time, the updated DNS information may not be available to all users and systems immediately.
Pre-requisites
To configure DNSSEC for a domain, your domain and DNS service provider must meet the following prerequisites:
The registry for the top-level domain (TLD) must support DNSSEC. To determine whether the registry for your TLD supports DNSSEC, see Domains that you can register with Amazon Route 53.
The DNS service provider for the domain must support DNSSEC. You must configure DNSSEC with the DNS service provider for your domain before you add public keys for the domain to Route 53.
The number of public keys that you can add to a domain depends on the TLD for the domain: .com and .net domains, up to thirteen keys; all other domains, up to four keys.
Before you start: recommendations
Lowering the zone's maximum TTL will help reduce the wait time between enabling signing and the insertion of the Delegation Signer (DS) record. Lowering the zone's maximum TTL to 1 hour (3600 seconds) allows us to roll back after only an hour if any resolver has problems caching signed records.
Lower the SOA TTL and SOA minimum field. The SOA minimum field is the last field in the SOA record data. The SOA TTL and SOA minimum field determine how long resolvers remember negative answers. After you enable signing, Route 53 name servers start returning NSEC records for negative answers. The NSEC contains information that resolvers might use to synthesize a negative answer. If you have to roll back because the NSEC information caused a resolver to assume a negative answer for a name, then you only have to wait for the maximum of the SOA TTL and SOA minimum field for the resolver to stop the assumption.
Make sure the TTL and SOA minimum field changes are effective. Use GetChange to ensure that your changes have been propagated to all Route 53 DNS servers.
Enabling DNSSEC signing at Route 53
Click on Enable DNSSEC signing on the DNSSEC signing tab in the hosted zone console.
Choose to create a customer-managed CMK.
Create the KSK and enable signing.
After enabling DNSSEC, click on View Information to Create a DS Record and check the Establish a chain of trust -> Another Domain Registrar section.
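These console steps can also be scripted. Below is a minimal sketch with the AWS CLI, assuming an existing public hosted zone; the hosted zone ID and KSK name are illustrative, and the KMS key policy must additionally allow the Route 53 DNSSEC service (dnssec-route53.amazonaws.com) to use the key.

```bash
# Sketch only: IDs and names are illustrative assumptions.
HOSTED_ZONE_ID="Z0123456789EXAMPLE"

# 1. Create an asymmetric KMS key for DNSSEC signing.
#    It must be ECC_NIST_P256, SIGN_VERIFY, and live in us-east-1.
KMS_KEY_ARN=$(aws kms create-key \
  --region us-east-1 \
  --key-spec ECC_NIST_P256 \
  --key-usage SIGN_VERIFY \
  --query 'KeyMetadata.Arn' --output text)

# 2. Create the key-signing key (KSK) in the hosted zone and activate it.
aws route53 create-key-signing-key \
  --hosted-zone-id "$HOSTED_ZONE_ID" \
  --key-management-service-arn "$KMS_KEY_ARN" \
  --name my_ksk \
  --status ACTIVE \
  --caller-reference "ksk-$(date +%s)"

# 3. Enable DNSSEC signing for the hosted zone.
aws route53 enable-hosted-zone-dnssec --hosted-zone-id "$HOSTED_ZONE_ID"

# 4. Retrieve the DS record values (key tag, algorithms, digest) to enter at the registrar.
aws route53 get-dnssec --hosted-zone-id "$HOSTED_ZONE_ID"
```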
GoDaddy configuration steps
Go to Domain Portfolio -> Domain Settings for your domain and select DNSSEC.
Create a new DS record with the following information:
Key Tag: Key Tag in AWS
Algorithm: Signing Algorithm Type in AWS
Digest Type: Digest Algorithm Type in AWS
Digest: Digest in AWS
Testing
To check if the new configuration is properly set up and the DNS is answering as expected:
dig journeytrack.io DNSKEY +dnssec
We should receive two DNSKEYs (one for the ZSK and another for the KSK) and a signed resource record (RRSIG), confirming that the DNS servers are successfully using DNSSEC.
To check the chain of trust with the TLD:
dig com NS +short (use your domain's TLD)
The answer should retrieve the TLD server names.
dig journeytrack.io DS +short
To make sure we get the DS record for the journeytrack domain from the TLD. You should get the DS record shown in the DNSSEC recommendations to create the record.
dig journeytrack.io A +dnssec
To check if the resource record is set with signatures. Answers must return A and RRSIG info.
dig DNSKEY journeytrack.io +short
To validate the DS public key.
Rollback
If any problem or issue arises during the implementation, DNSSEC can be easily reverted:
Disable DNSSEC from GoDaddy and Route 53
Restore the SOA changes
Undo the NS TTL changes
Lourdes Dorado Cloud Engineer Teracloud

  • How to configure ArgoCD OIDC with Google Workspace in 5 simple steps

There are different ways to handle authentication in ArgoCD, but using only the admin password is not secure enough. For this reason, we’ll learn how to configure your ArgoCD to integrate with Google Workspace for login. In this TeraTip we’ll cover one of the approaches for authentication, using groups from Google Workspace.
Before you get started…
In order to get SSO working you need to have the SSL certificate and URL for your server already configured; otherwise, you’ll get errors during authentication.
Step #1: Create the OAuth Screen
First, create a project with any name you want and configure the OAuth consent screen. In the Authorized domains section, it is important to configure the domain of the email addresses your users have; in this case, we add the domain for our organization. Finally, on the Scopes tab select the userinfo.profile and openid scopes. Those are the scopes ArgoCD needs for login.
Step #2: Create the OAuth Client ID
On the Credentials tab, click on + Create Credentials and OAuth client ID. Then select Web Application as the Application type, and configure the JavaScript origins and redirect URIs. In the Authorized JavaScript origins section, configure the root URL for your ArgoCD. Then in Authorized redirect URIs copy this URL but append the /api/dex/callback path. Then click on Create and save your Client ID and Client Secret for later.
Step #3: Configure the Service Account on Google Workspace
Now create the Service Account and configure Domain Wide Delegation, in order to make ArgoCD able to read the groups. In the Service Accounts section of the Google console, click on + CREATE SERVICE ACCOUNT; you only need to enter a name, and any name you like will do. Open your service account, go to the Keys tab, click on Add Key, and select JSON as the format. Save the key file; we will use it later for configuring the OIDC.
Step #4: Set up Domain Wide Delegation and enable the Admin SDK
To finish the Google configuration you’ll now have to configure Domain Wide Delegation and enable the Admin SDK. First head to the Google Admin console, go to Security, Access and data control, API controls, and, lastly, click on Manage Domain Wide Delegation. Click on Add Client, paste the Client ID of your service account into Client ID, and in the scopes section paste this: https://www.googleapis.com/auth/admin.directory.group.readonly Finally, head to https://console.cloud.google.com/apis/library/admin.googleapis.com and enable the Admin SDK for your project.
Step #5: Configure ArgoCD
To configure the OIDC, create two secrets on your cluster: one for the Client Secret we got in Step 2 and one for the JSON key we got in Step 3. Then, if you are using the ArgoCD Helm chart, wire the Dex Google connector to those secrets in your values (tested on chart version 5.27.1). A minimal sketch of the two secrets is included at the end of this post. Now you have your ArgoCD configured with Google SSO!
Juan Wiggenhauser Cloud Engineer Teracloud
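As referenced in Step #5, here is a minimal sketch of the two secrets created with kubectl. The secret names, namespace, keys, and file paths are illustrative assumptions and must match whatever your Helm values reference.

```bash
# Sketch only: names, namespace, and paths are illustrative assumptions.

# Secret holding the OAuth client secret obtained in Step #2
kubectl -n argocd create secret generic argocd-google-oauth-client \
  --from-literal=client_secret='<CLIENT_SECRET_FROM_STEP_2>'

# Secret holding the service account JSON key downloaded in Step #3
kubectl -n argocd create secret generic argocd-google-sa-json \
  --from-file=googleAuth.json=./<SERVICE_ACCOUNT_KEY_FILE>.json
```

In the Helm values, the Dex connector of type google then references the Client ID, this client secret, and the mounted JSON key through its serviceAccountFilePath and adminEmail options, which is what allows ArgoCD to read your Google Workspace groups.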

  • Security announcements at AWS Re:Invent 2023

AWS re:Invent is AWS’s end-of-the-year event where the latest developments in AWS Cloud services are announced. Our team had the pleasure of attending talks with the most important announcements for what’s next in Cloud Security, and the following is their shortlist.
Access Analyzer
1) Custom policy checks powered by automated reasoning. Custom policy checks validate that IAM policies adhere to your security standards ahead of deployments. They use the power of automated reasoning (security assurance backed by mathematical proof) to detect nonconformant updates to policies, and they are easy to integrate into CI/CD pipelines.
2) Simplified inspection of unused access to guide you toward least privilege. IAM Access Analyzer continuously analyzes your accounts to identify unused access and creates a centralized dashboard with findings. The findings highlight unused roles, unused access keys for IAM users, and unused passwords for IAM users. The findings provide visibility into unused services and actions for active IAM roles and users.
Security Hub
1) Customized security controls. Security teams can now refine the best practices monitored by Security Hub controls to meet more specific security expectations, with your specific password policies, retention frequencies, or other attributes.
2) Major dashboard enhancements. New data visualizations, filtering, and customization enhancements. You can now filter and customize your dashboard views, as well as view a new set of widgets that were carefully chosen to reflect the modern cloud security threat landscape and relate to potential threats and vulnerabilities in your AWS cloud environment. The new filtering functionality allows you to filter the Security Hub dashboard by account name and ID, resource tag, product name (such as Amazon GuardDuty or Amazon Inspector), Region, severity, and application. You can also choose which widgets will appear in the dashboard, and customize their position and size.
3) Findings enrichment. Metadata enrichment for findings aggregated in AWS Security Hub allows you to better contextualize, prioritize, and take action on your security findings. This enrichment adds resource tags, a new AWS application tag, and account name information to every finding ingested into Security Hub, including findings from AWS security services such as Amazon GuardDuty, Amazon Inspector, and AWS IAM Access Analyzer, as well as a large and growing list of AWS Partner Network (APN) solutions. It eliminates the need to build data enrichment pipelines or manually enrich metadata of security findings. It also makes it easier to fine-tune findings for automation rules, search or filter findings and insights, and assess security posture status by application in Security Hub widgets and in related AWS applications.
4) New central configuration capabilities. Centrally enable and configure Security Hub standards and controls across accounts and Regions in just a few steps. Use Security Hub central configuration to address gaps in your security coverage by creating security policies with your desired standards and controls and applying them in selected Regions across accounts and Organizational Units (OUs). Set the Security Hub delegated administrator (DA) for all Regions at once, and then view and configure the cloud security posture management capabilities, such as standards and controls, for all or some accounts globally, without needing to update them account-by-account and Region-by-Region.
Secrets Manager
1) Supports batch retrieval of secrets. A single API call identifies and retrieves a group of secrets for your application. With BatchGetSecretValue, you can input a list of secret names, ARNs, or filter criteria, such as tags. The API returns a response for all secrets meeting the criteria in the same format as the existing GetSecretValue API. This allows you to optimize your workloads while reducing the number of API calls (see the sketch below).
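A minimal sketch of the new batch call with the AWS CLI; the secret names and tag values below are illustrative assumptions.

```bash
# Sketch only: secret names and tags are illustrative assumptions.

# Retrieve several secrets by name in a single call
aws secretsmanager batch-get-secret-value \
  --secret-id-list prod/app/db-password prod/app/api-key

# Or retrieve every secret matching a filter, for example a tag key
aws secretsmanager batch-get-secret-value \
  --filters Key="tag-key",Values="team"
```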
Amazon Detective
1) Supports security investigations for Amazon GuardDuty ECS Runtime Monitoring. Enhanced visualizations and additional context for detections on ECS. Use the new runtime threat detections from GuardDuty and the investigative capabilities from Detective to improve your detection and response for potential threats to your container workloads.
2) Log retrieval from Amazon Security Lake. Detective integrates with Amazon Security Lake, enabling security analysts to query and retrieve logs stored in Security Lake, and to get additional information from AWS CloudTrail logs and Amazon Virtual Private Cloud (Amazon VPC) Flow Logs stored in Security Lake while conducting security investigations in Detective.
3) Investigations for IAM. Detective automatically investigates AWS Identity and Access Management (IAM) entities for indicators of compromise (IoC). It helps security analysts determine whether IAM entities have potentially been compromised or involved in any known tactics, techniques, and procedures (TTP) from the MITRE ATT&CK framework. There is no additional charge for this new capability, and it’s available for all existing and new Detective customers.
Amazon GuardDuty
1) Runtime monitoring for Amazon EC2. It gives you visibility into on-host and operating system-level activities and provides container-level context into detected threats. Compatible with AWS Organizations.
2) ECS Runtime Monitoring, including AWS Fargate. An expansion of Amazon GuardDuty that introduces runtime threat detection for Amazon Elastic Container Service (Amazon ECS) workloads, including serverless container workloads running on AWS Fargate. It gives you visibility into on-host and operating system-level activities and provides container-level context into detected threats, such as containers repurposed for cryptocurrency mining or unusual activity indicating unauthorized code execution on your container.
AWS Analytics
1) Simplified users’ data access across services with IAM Identity Center. Use trusted identity propagation with AWS IAM Identity Center to manage and audit access to data and resources based on user identity. Available to customers accessing AWS data sources through Amazon QuickSight, EMR Studio, and Redshift Query Editor; supported third-party tools and applications; and S3 Access Grants. In big data environments managed by Amazon EMR, trusted identity propagation is available for EMR on EC2. It interacts with authorization engines, including Amazon Redshift, Lake Formation, and S3 Access Grants, and propagates the user’s identity to the data source, Amazon Redshift or S3.
Amazon Inspector
1) Agentless vulnerability assessments for Amazon EC2 in preview. Continuous monitoring of your Amazon EC2 instances for software vulnerabilities without installing an agent or additional software. You can expand your vulnerability assessment coverage across your EC2 infrastructure with Amazon Inspector agentless scanning for EC2 instances that do not have SSM Agents installed or configured.
Amazon Inspector takes snapshots of EBS volumes to collect the software application inventory from the instances and perform vulnerability assessments.
2) Request a Cyber Insurance Quote from an AWS Cyber Insurance Competency Partner. Customers can receive cyber insurance pricing estimates, purchase plans, and be confident they have the coverage for security and recovery services when needed most. Customers leverage an AWS Security Hub assessment scanning against the AWS Foundational Best Practices Framework and deliver the assessment results to insurance providers. Customers with a security posture that follows AWS best practices achieve rewards similar to “safe-driver” discounts.
3) AWS Built-in Competency Partner software automates installation for customers. AWS Built-in software uses a well-architected Modular Code Repository (MCR) designed to add value to partner software solutions. It provides building blocks called Cloud Foundational Services across multiple domains such as identity, security, and operations.
Final thoughts
AWS re:Invent 2023 has not only redefined the benchmarks for cloud security but has also set a new standard for collaboration between cloud providers, security solutions, and insurance services. These advancements collectively contribute to fostering a more secure, efficient, and responsive cloud computing landscape.
Lourdes Dorado Cloud Engineer Teracloud
If you want to know more about AWS re:Invent 2023, we suggest checking Monitoring Updates at AWS Re:Invent 2023

  • Monitoring Updates at AWS Re:Invent 2023

Welcome to our recap of the exciting monitoring announcements made during the AWS Re:Invent 2023 event in Las Vegas!
1. Natural Language Query in Amazon CloudWatch
In an exciting advancement, AWS has introduced a natural language query feature for Amazon CloudWatch. Now you can make more intuitive and expressive queries across logs and metrics. This makes it easier to extract valuable information from your logs and metrics. https://aws.amazon.com/blogs/aws/use-natural-language-to-query-amazon-cloudwatch-logs-and-metrics-preview/
2. Amazon Managed Service for Prometheus Collector
The new feature "Amazon Managed Service for Prometheus Collector" is here to simplify metric collection in Amazon EKS environments. The highlight is metric collection without the need for additional agents. Interested in simpler management of your metrics in EKS? This is a must-read. https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/
3. Metric Consolidation with Amazon CloudWatch
In an effort to address hybrid and multicloud challenges, AWS has introduced a new capability for Amazon CloudWatch. You can now consolidate your metrics from hybrid, multicloud, and on-premises environments in one place. This provides a more comprehensive view and makes resource management easier. https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/
Conclusion
These advancements enhance user experience, simplify operations, and offer a consolidated perspective across diverse cloud setups. Exciting times lie ahead in the landscape of AWS monitoring!
Martín Carletti Cloud Engineer Teracloud

  • What C levels must know about their IT in the age of AI

A recent comprehensive survey by Cisco underscores a critical insight: the majority of businesses are racing against time to deploy AI technologies, yet they confront significant gaps in readiness across key areas. This analysis, drawn from over 8,000 global companies, reveals an urgent need for enhanced AI integration strategies. See the original survey at Cisco global AI readiness survey, but if you want to know how to apply this information in your business today, keep reading.
Contents: Key Findings | Practical Steps for AI Integration | Final Thoughts
Key Findings
- 97% of businesses acknowledged increased urgency to deploy AI technologies in the past six months.
- Strategic time pressure: 61% believe they have a year at most to execute their AI strategy to avoid negative business impacts.
- Readiness gaps in strategy, infrastructure, data, governance, talent, and culture, with 86% of companies not fully prepared for AI integration.
The report highlights an AI Readiness Spectrum to categorize organizations:
1. Pacesetters: Leaders in AI readiness
2. Chasers: Moderately prepared
3. Followers: Limited preparedness
4. Laggards: Significantly unprepared
This classification mirrors our approach at Teracloud, where we use the Datera Data Maturity Model (D2M2) to guide our customers towards data maturity and AI readiness.
Practical Steps for AI Integration
Let’s explore some recommendations that will help prepare your organization for the AI era.
Develop a Robust Strategy
- Prioritize AI in your business operations. The urgency is evident, with a substantial majority of businesses feeling the pressure to adopt AI technologies swiftly.
- Create a multi-faceted strategy that addresses all key pillars simultaneously. You can use our D2M2 framework and cover all your bases. Alternatively, you can base your strategy on the generic AWS Well-Architected Framework.
Ensure Data Readiness
- Recognize the critical role of 'AI-ready' data. Data serves as the AI backbone, yet it’s often the weakest link, not because we don't have data but because it isn’t accessible.
- Tackle data centralization issues to leverage AI's full potential. With cloud tools, the information can remain distributed yet be consumed through a single endpoint, for instance using Amazon Athena and other data-at-scale tools.
- Facilitate seamless data integration across multiple sources. Employing tools like AWS Glue can help in automating the extraction, transformation, and loading (ETL) processes, making diverse data sets more cohesive and AI-ready.
Upgrade Infrastructure and Networking
- To accommodate AI's increased power and computing demands, over two-thirds (79 percent) of companies will require further data center graphics processing units (GPUs) to support current and future AI workloads.
- AI systems require large amounts of data. Efficient and scalable data storage solutions, along with robust data management practices, are essential.
- Fast and reliable networking is necessary to support the large-scale transfer of data and the intensive communication needs of AI systems.
- Enhance IT infrastructure to support increasing AI workloads.
- Focus on network adaptability and performance to meet future AI demands.
Implement Robust Governance and Security
- Develop comprehensive AI policies, considering data privacy, sovereignty, bias, fairness, and transparency.
- AI-related regulations are evolving. A flexible governance strategy allows the organization to quickly adapt to new laws and standards.
- A solid governance framework is necessary to ensure AI is used ethically and responsibly, adhering to ethical guidelines and standards.
- Prioritize data security and privacy. Utilize AWS’s comprehensive security tools like AWS Identity and Access Management (IAM) and Amazon Cognito to safeguard sensitive data, a crucial aspect when deploying AI applications.
Focus on Talent Development
- Address the digital divide in AI skills. While most companies plan to invest in upskilling, there's skepticism about the availability of talent.
- Emphasize continuous learning and skill development.
Cultivate a Data-Centric Culture
- Embrace a culture that values and understands the importance of data for AI applications.
- Address data fragmentation: over 80% of organizations face challenges with siloed data, a major impediment to AI effectiveness.
Understanding these findings is just the first step. Implementing them requires a strategic approach, one that we champion through our Datera Data Maturity Model (D2M2). Our model not only aligns with Cisco's categorizations but also offers a roadmap for businesses to evolve from AI Followers to Pacesetters. For a deeper dive into the Cisco survey, access the full report: Cisco Global AI Readiness Survey. To know more about how Teracloud helps its customers enter the Generative AI era, please contact us.
Final Thoughts
Adopting AI is no longer optional but a necessity for competitive advantage. By focusing on the six pillars of AI readiness, companies can transform challenges into opportunities, steering towards a future where AI is not just an ambition but a tangible asset driving business success.
Carlos José Barroso Head of DataOps Teracloud
If you want to know more about Generative AI with AWS, contact us at info@teracloud.io. If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • Get your first job in IT with AWS Certifications

Could you land your first job with just AWS certifications and no experience at all? Almost… but not exactly. The following explores how helpful an AWS Certification is when landing your first job in IT, and why it’s so important not to fall for the “only certifications will guarantee you a job” trap.
An AWS certification is a professional credential offered by Amazon Web Services (AWS) that validates an individual's knowledge and expertise in various AWS cloud computing services and technologies. These certifications are designed to demonstrate a person's proficiency in using AWS services and solutions to design, deploy, and manage cloud-based applications and infrastructure. It's proof that you know how to use Amazon Web Services and understand cloud concepts. That said, one could deduce that obtaining these certifications is a really good way to demonstrate your knowledge and stand out among your peers. But is that all? AWS Partners would disagree.
What are AWS Partners?
AWS Partners are organizations that collaborate with AWS to offer a wide range of services, solutions, and expertise related to AWS cloud computing. AWS Partners come in various forms and play critical roles in helping businesses leverage AWS services to meet their unique needs. In other words, partners are companies that help AWS implement their services. You have different partner tiers:
AWS Select Tier Services Partners
AWS Advanced Tier Services Partners
AWS Premier Tier Services Partners
The equation is really simple: the more qualified you are, the more clients you get. The more clients you get, the more money the company makes. Therefore it’s in an AWS Partner's best interest to become more specialized, and that's where certifications come into play. To become a specialized partner, among other things, you need certified technical individuals. For example, to be an AWS Premier Tier Services Partner, companies need 25 certified individuals. And that’s why having a certification becomes really valuable, even more so if it’s a Professional or Specialty one.
Other Benefits
There are even badges for how many certifications a partner has, which give more credibility to the provided service. There are other benefits as a partner, such as being eligible to earn credits for the client. That means receiving hundreds or even thousands in financing through credits for you to offer your clients.
Final thoughts
To sum up, if you don’t have any experience at all, having an AWS Certification will really help you to obtain interviews, and if you combine the knowledge acquired with real case scenarios you’ll be closer to landing your dream job. If, on the other hand, you only obtain the certification yet don’t have any practical abilities or field work, the certificate won’t really help at all. Strategize. Find companies that are AWS Partners and apply to them. They’re looking for technical individuals and you’re looking for real case scenarios. It’s in real-life Cloud challenges where you actually get to apply your knowledge and ultimately gain the confidence and proof you’ll need to continue developing your professional skills.
Ignacio Bergantiños Cloud Engineer Teracloud
If you want to know more about AWS, we suggest checking How to apply for Amazon's Service Delivery Program (SDP)
If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to protect your SSH and SCP Connections with AWS Sessions Manager in 4 simple steps

In certain scenarios, establishing secure SSH or SCP connections with EC2 instances within our infrastructure becomes necessary. AWS Sessions Manager offers a robust solution to accomplish this, allowing us to avoid exposing critical ports and to enhance overall security.
Step #1: Install the latest version of the AWS CLI and the AWS Sessions Manager plugin
To begin, install the latest versions of the AWS CLI and the Sessions Manager plugin. The following links provide detailed instructions for installation: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html
Step #2: Modify the SSH config file
Locate your SSH config file, which can be found at "~/.ssh/config" for Linux and Mac distributions, or "C:\Users\<username>\.ssh\config" for Windows. Add the following lines to the config file:
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Step #3: Configure the SSM agent and the EC2 instance profile of your instances
Follow the SSM agent installation instructions provided in the documentation: https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html In my case, I’m installing it on an Ubuntu machine with the following commands:
sudo snap install amazon-ssm-agent --classic
sudo snap list amazon-ssm-agent
Additionally, attach the AmazonSSMManagedInstanceCore policy to the instance profile role of the EC2 instances you wish to access, ensuring the necessary permissions for AWS Systems Manager core functionality (a CLI sketch for attaching the policy is included at the end of this post).
Step #4: Start an SSH/SCP session in your local environment
Before initiating SSH/SCP sessions with your EC2 instances, specify your AWS profile, or the region of the EC2 instances if you are using temporary credentials, using the following commands:
export AWS_REGION=<region>
export AWS_PROFILE=<profile>
# ssh command
ssh -i id_rsa ubuntu@i-xxxxxxxxx
# scp command
scp -i id_rsa <local-file> ubuntu@i-xxxxxxxxx:/<destination-path>
By following these steps, you can confidently protect your SSH and SCP connections using AWS Sessions Manager. This comprehensive guide empowers you to establish secure access while minimizing potential security risks. Happy coding and see you next time, in the Cloud!
Juan Bermudez Cloud Engineer Teracloud
If you want to know more about Cloud Security, we suggest checking Best Security Practices, Well-Architected Framework
If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
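As mentioned in Step #3, the instance role needs the AmazonSSMManagedInstanceCore managed policy. A minimal sketch with the AWS CLI, assuming you already know the name of the IAM role attached to the instance profile (the role name below is illustrative):

```bash
# Sketch only: the role name is an illustrative assumption.
aws iam attach-role-policy \
  --role-name my-ec2-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
```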

  • How to apply for Amazon's Service Delivery Program (SDP)

    Amazon's Service Delivery Program (SDP) presents an exciting opportunity for service providers looking to work with one of the world's most influential tech giants. By joining the SDP, companies can establish strong relationships with Amazon Web Services (AWS) and access a global audience. However, the competition is fierce, and preparation is key to standing out in the application process. In this guide, we will explore essential tips and considerations for successfully applying to Amazon's SDP. 1. Understand the Program Requirements Before you embark on your journey to apply for Amazon's Service Delivery Program (SDP), it's crucial to have a comprehensive understanding of the program's requirements. These requirements serve as the foundation for your application, ensuring that you align with Amazon's expectations and can provide the level of service they seek. Here's an expanded breakdown of what this entails: Technical Expertise: Amazon's SDP is geared towards service providers who possess a deep understanding of Amazon Web Services (AWS). This means you should have a proven track record of working with AWS technologies, deploying solutions, and managing AWS resources effectively. Your technical expertise should extend to various AWS services and use cases. Certifications: AWS certifications are a testament to your knowledge and proficiency in AWS. Depending on the specific services you plan to deliver as part of the SDP, having relevant certifications can significantly bolster your application. Certifications demonstrate your commitment to continuous learning and your ability to stay updated with the latest AWS developments. Referenceable Clients: References from satisfied clients can be a powerful asset in your application. These references should be able to vouch for your capabilities, professionalism, and the positive impact your services have had on their AWS environments. Having a diverse range of referenceable clients from various industries can demonstrate your versatility and ability to adapt to different contexts. Business Practices: Amazon values partners who uphold high standards of business ethics and professionalism. Your company's business practices, including responsiveness, communication, and customer-centric approaches, should align with Amazon's values. A strong reputation in the industry for integrity and reliability can enhance your application's credibility. AWS Partnership Tier: Depending on the tier of partnership you aim to achieve within the SDP, there might be specific requirements to fulfill. Higher partnership tiers often require a deeper level of engagement with AWS, which could include meeting revenue targets, demonstrating a significant number of successful customer engagements, and showing a commitment to driving AWS adoption. 2. Demonstrate AWS Expertise As you navigate the application process for Amazon's Service Delivery Program (SDP), highlighting your expertise in Amazon Web Services (AWS) is a fundamental aspect that can set your application apart. Demonstrating your in-depth understanding of AWS technologies and your ability to leverage them effectively is key. Here's a comprehensive exploration of how to effectively showcase your AWS expertise: Project Portfolio: Provide a detailed portfolio of projects that showcases your hands-on experience with AWS. Highlight a variety of projects that demonstrate your proficiency across different AWS services, such as compute, storage, networking, security, and databases. 
Include project descriptions, the challenges you addressed, the solutions you implemented, and the outcomes achieved. Architectural Excellence: Describe how you've designed AWS architectures to meet specific business needs. Explain the decision-making process behind architecture choices, scalability considerations, fault tolerance measures, and security implementations. Highlight instances where your architectural decisions led to optimized performance and cost savings. Use Cases: Illustrate your familiarity with a range of AWS use cases. Detail scenarios where you've successfully deployed AWS solutions for tasks like application hosting, data analytics, machine learning, Internet of Things (IoT), and serverless computing. Showcase your ability to tailor AWS services to diverse client requirements. Problem Solving: Provide examples of how you've troubleshot and resolved complex issues within AWS environments. Discuss instances where you identified bottlenecks, optimized performance, or resolved security vulnerabilities. This demonstrates your ability to handle real-world challenges that can arise during service delivery. AWS Best Practices: Emphasize your adherence to AWS best practices in terms of security, compliance, performance optimization, and cost management. Discuss how you've implemented well-architected frameworks and followed AWS guidelines to ensure the reliability and scalability of your solutions. 3. Focus on Innovation and Quality Amazon seeks partners who not only meet standards but also bring innovation and quality to the table. In your application, showcase how your company adds unique value through innovative approaches and how you ensure quality in every service you offer. Continuous Improvement: Highlight your commitment to continuous improvement in your services. Describe how you actively seek feedback from clients and incorporate their input to refine and enhance your solutions. Emphasize your agility in adapting to changing client needs and industry trends. Metrics of Success: Quantify the success of your innovative solutions with relevant metrics. If your solution improved performance, reduced costs, or increased revenue for your clients, provide specific numbers and percentages to highlight the tangible impact of your work. Quality Assurance: Describe your quality assurance processes and methodologies. Explain how you ensure that your solutions meet the highest standards in terms of functionality, security, and performance. Highlight any certifications, industry standards, or best practices you adhere to. Collaboration with Clients: Showcase instances where you collaborated closely with clients to co-create innovative solutions. Discuss how you facilitated workshops, brainstorming sessions, and prototyping activities to bring their ideas to life while adding your expertise. 4. Prepare Strong References Solid references from past clients are a vital component of your application. Select references that can vouch for your capabilities and achievements in delivering AWS services. Make sure you have authentic testimonials that highlight your professionalism and skills. 5. Articulate Your Value Proposition Clearly explain why your company is the right choice for the SDP. What makes your approach unique? How will your collaboration benefit Amazon and AWS customers? Articulate your value proposition concisely and convincingly. 6.
Preparation and Detailed Review Thorough preparation and meticulous review are crucial steps in the application process for Amazon's Service Delivery Program (SDP). Any grammatical errors or inaccuracies in your application could impact the impression you make on Amazon's evaluators. Here's a detailed exploration of how to approach these aspects: Organized Structure: Organize your application coherently and clearly. Divide your content into distinct sections such as past experience, value proposition, project examples, and references. Use headers and bullet points to enhance readability and highlight key points. Relevant Content: Ensure that each section of your application is relevant to the requirements of the SDP. Avoid including redundant information or content that does not directly contribute to demonstrating your experience and capability to deliver quality services on AWS. Accurate Information: Verify that all provided information is accurate and up-to-date. Including incorrect or outdated information can affect the credibility of your application. Exemplary Stories: In the past experience section, choose project stories that exemplify your achievements and capabilities. Provide specific details about challenges you faced, how you overcame them, and the tangible results you achieved. Professional Language: Maintain a professional and clear tone throughout your application. Avoid unnecessary jargon or overly technical language that might hinder understanding for evaluators who may not be experts in all technical areas. Reflection and Context: Don't just list achievements, but also provide context and reflection on your experience. Explain why certain projects were challenging or why you chose specific approaches. This demonstrates your ability to think critically and learn from experiences. Grammatical Review: Carefully review your application for grammatical and spelling errors. A professionally written and well-edited application showcases your attention to detail and seriousness. Consistent Formatting: Maintain consistent formatting throughout the application. Use the same font, font size, and formatting style throughout the document to create a coherent and professional presentation. External Feedback: Consider asking colleagues or mentors to review your application. Often, an extra set of eyes can identify areas for improvement that you might have overlooked. Deadlines and Submission: Ensure you meet the deadlines set by Amazon and submit your application according to the provided instructions. Applying for Amazon's SDP is an exciting opportunity but requires careful planning and preparation. By following these tips and considerations, your application will be well on its way to standing out among competitors and establishing a strong partnership with Amazon Web Services. Remember that authenticity, AWS expertise, and a clear value proposition are key elements to impressing in the selection process. Best of luck in your application to Amazon's SDP! For more info: https://aws.amazon.com/partners/programs/service-delivery/?nc1=h_ls Julian Catellani Cloud Engineer Teracloud If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • Secure Your Data with SOC 2 Compliant Solutions

In today's digital landscape, where data breaches and cyber threats have become increasingly sophisticated, protecting sensitive information is of paramount importance. One effective approach that organizations are adopting to ensure the security of their data is implementing SOC 2-compliant solutions. In this article, we'll delve into what SOC 2 compliance entails, its significance for safeguarding data, and how businesses can benefit from adopting such solutions.
Table of Contents
Understanding SOC 2 Compliance
Key Components of SOC 2 Compliance
Who Needs SOC 2 Compliance?
In an era where data breaches can lead to devastating financial and reputational losses, companies must adopt robust strategies to safeguard their sensitive information. SOC 2 compliance offers a comprehensive framework that helps organizations fortify their data security measures. By adhering to the SOC 2 standards, companies can not only protect themselves from potential cyber threats but also gain a competitive edge in the market.
Understanding SOC 2 Compliance
What is SOC 2? SOC 2, or Service Organization Control 2, is a set of stringent compliance standards developed by the American Institute of CPAs (AICPA). It focuses on the controls and processes that service providers implement to ensure the security, availability, processing integrity, confidentiality, and privacy of customer data. Unlike SOC 1, which assesses financial controls, SOC 2 is geared towards evaluating the effectiveness of a company's non-financial operational controls.
Why is SOC 2 Compliance Important? SOC 2 compliance is crucial because it reassures customers, partners, and stakeholders that a company has established rigorous security measures to protect sensitive data. As data breaches continue to make headlines, consumers are becoming more cautious about sharing their information with businesses. SOC 2 compliance demonstrates a commitment to data protection, enhancing trust and credibility.
Key Components of SOC 2 Compliance
Security: Security is a foundational component of SOC 2 compliance. It involves implementing safeguards to protect against unauthorized access, data breaches, and other security threats. This includes measures such as multi-factor authentication, encryption, and intrusion detection systems.
Availability: Businesses must ensure that their services are available and operational when needed. SOC 2 compliance assesses the measures in place to prevent and mitigate service interruptions, ranging from robust infrastructure to disaster recovery plans.
Processing Integrity: Processing integrity focuses on the accuracy and completeness of data processing. Companies must have controls in place to ensure that data is processed correctly, preventing errors and unauthorized modifications.
Confidentiality: Confidentiality revolves around protecting sensitive information from unauthorized disclosure. This includes customer data, intellectual property, and other confidential information.
Privacy: Privacy controls are vital for businesses that handle personally identifiable information (PII). SOC 2 compliance evaluates whether a company's practices align with relevant data privacy regulations.
Who Needs SOC 2 Compliance?
SaaS Companies: Software-as-a-Service (SaaS) companies often handle a vast amount of customer data. Achieving SOC 2 compliance is essential for building trust and attracting clients concerned about the security of their data.
Cloud Service Providers: Cloud service providers store and process data for various clients. SOC 2 compliance demonstrates their commitment to ensuring the security, availability, and privacy of customer data.
Data-Centric Businesses: Companies that rely heavily on data, such as e-commerce platforms or healthcare providers, need SOC 2 compliance to protect customer information and meet legal requirements.
Stay tuned for the rest of the article, where we will delve deeper into achieving SOC 2 compliance, its benefits, and its challenges, as well as a comparison with other compliance frameworks.
Paulo Srulevitch Content Creator Teracloud
If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇

  • How to integrate Prometheus in an EKS Cluster as a Data Source in AWS Managed Grafana

Whether you're an experienced DevOps engineer or just starting your cloud journey, this article will equip you with the knowledge and tools needed to effectively monitor and optimize your EKS environment.
Objective
Configure and use Prometheus to collect metrics on an Amazon EKS cluster and view those metrics in AWS Managed Grafana (AMG). Provide usage instructions and an estimate of the costs of connecting Prometheus metrics as an AMG data source. Let’s assume that Fluent Bit is already configured on the EKS cluster.
Step #1: Prometheus Configuration
Ensure Prometheus is installed and running in your Amazon EKS cluster. You can install it via Terraform using the Helm chart; a minimal sketch with the community chart is shown below. Verify that Prometheus is successfully collecting metrics from your cluster nodes and applications.
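For reference, here is a minimal installation sketch using the community Helm chart directly; the same chart can be wrapped in a Terraform helm_release resource. The repository, release name, and namespace are illustrative assumptions.

```bash
# Sketch only: chart, release name, and namespace are illustrative assumptions.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus (server, node-exporter, kube-state-metrics, alertmanager)
# into a dedicated "monitoring" namespace.
helm install prometheus prometheus-community/prometheus \
  --namespace monitoring \
  --create-namespace

# Verify that the pods are running before wiring the data source.
kubectl -n monitoring get pods
```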
Step #2: Configure the data source in Grafana
Now you’ll need to configure the data source in Grafana (here the created LoadBalancer will serve as a reference). Make sure the AWS Route 53 console is open, and a private hosted zone named "monitoring.domainname" is created. Inside this hosted zone, create an Alias record pointing to the LoadBalancer previously mentioned. This data will be used to configure the Prometheus service as the data source in AMG.
AWS Managed Grafana Configuration
Provision an instance of AWS Managed Grafana. Access the AWS Managed Grafana console and obtain the URL to access the Grafana instance. Ensure you have the necessary permissions to manage data sources in AWS Managed Grafana.
Configure Prometheus as a Data Source in AWS Managed Grafana:
Access the AWS Managed Grafana console using the URL obtained in the previous step. Navigate to the "Configuration" section and select "Data sources". Click on "Add data source" and choose "Prometheus" as the data source type. Complete the required fields, including the Prometheus endpoint URL and authentication credentials if applicable, or a Workspace IAM Role. Save the data source configuration.
Visualizing Metrics in Grafana:
Identify the KPIs you need to visualize in the dashboard. Create dashboards in Grafana to visualize the metrics collected by Prometheus. Utilize Grafana's query and visualization options to create customized visualizations of your metrics. Explore different panel types such as graphs, tables, and text panels to present the information in a clear and understandable manner.
Step #3: Estimate costs
To estimate the costs associated with integrating Prometheus as a data source in AWS Managed Grafana, consider the following:
AWS Managed Grafana Costs: Refer to the AWS documentation to understand the details and pricing associated with AWS Managed Grafana. According to the documentation, the price is per license, either editor or user. The editor can create and edit both the workspace and the metric displays, and the user can only view the panels and metrics previously configured by the editor (https://aws.amazon.com/es/grafana/pricing/). Today, the editor license costs $9 per month and the user license costs $5 per month.
Storage Costs: If AWS Managed Grafana utilizes additional storage to store metrics collected by Prometheus, refer to the AWS documentation for information on pricing and available storage options.
Remember that costs may vary depending on your specific configuration and the AWS region where your AWS Managed Grafana instance is located. Consult the documentation and updated pricing details for accurate cost estimation.
Final thoughts
In conclusion, this is a very interesting and easy-to-implement alternative for clusters that have a large number of running pods. That scenario generates an even larger number of metrics, and that's where this license-based pricing becomes much more cost-effective than a metric-based pricing model.
Martín Carletti Cloud Engineer Teracloud
If you want to know more about EKS, we suggest checking Cross account access to S3 using IRSA in EKS with Terraform as IaaC
If you are interested in learning more about our #TeraTips or our blog's content, we invite you to see all the content entries that we have created for you and your needs. And subscribe to be aware of any news! 👇
