

  • AI and Its Inescapable Transformation

    In fairytales, the genie bursts forth from the lamp, granting wishes but often leading to unforeseen consequences. Today, a similar genie has been unleashed – the genie of Artificial Intelligence (AI). Unlike the fictional genie, however, there's no putting this one back in the bottle. Generative AI is here to stay, and its impact on every facet of our lives is inevitable.

    The Incremental March of AI

    Forget about robots taking over the world overnight. AI's infiltration will be gradual, a slow burn rather than an explosion. It has already begun with tasks involving data analysis and pattern recognition. Algorithmic trading in finance relies heavily on AI to analyze market trends and make lightning-fast decisions. Customer service is another battleground, with chatbots powered by generative AI in the cloud providing first-line support and resolving basic inquiries at scale. These are just the opening salvos.

    The Acceleration of Automation

    The story doesn't end with basic tasks. As AI capabilities grow exponentially, its influence will accelerate. Tasks that once required human judgment and expertise will become increasingly automated. Doctors might utilize AI-powered diagnostics to identify diseases with higher accuracy. Lawyers could leverage AI for legal research and document analysis, streamlining the legal process. The line between human and machine intelligence will continue to blur.

    The Limitless Landscape of AI

    The impact of AI extends far beyond specific industries. Imagine AI that can not only analyze data but also generate creative content. We could see AI-powered design tools that craft innovative products or compose captivating music. Scientific discovery could be revolutionized by AI that analyzes vast datasets and proposes groundbreaking hypotheses. Even social interaction might be reshaped by AI companions capable of offering emotional support and personalized advice. The possibilities are truly limitless.
    A Low Probability of Roadblocks

    Some fear a technological singularity – the point where AI surpasses human intelligence and becomes uncontrollable. While that remains a theoretical possibility, the road to singularity is likely paved with steady progress, not sudden leaps. The ongoing advancements in machine learning models like Large Language Models (LLMs) are a testament to this. These complex algorithms are already demonstrating remarkable abilities in areas like language processing and knowledge acquisition.

    The Power of LLMs

    LLMs are essentially digital brains trained on massive amounts of text data. They can generate human-quality text, translate languages, write different kinds of creative content, and answer questions in an informative way. These capabilities translate to real-world applications. LLMs have the potential to automate a significant portion of current knowledge-based jobs – some estimates suggest as much as 80%. Repetitive tasks like data entry, report generation, and even some aspects of coding could be handled by LLMs, freeing up human workers for more strategic and creative endeavors.

    In summary

    The genie of AI is out of the bottle, and its influence on our lives is undeniable. This isn't a cause for alarm, but rather a call to action. The future belongs to those who can adapt and learn alongside AI. By embracing lifelong learning and developing new skillsets, we can ensure we not only survive but thrive in this new era. The time to explore how AI can benefit you is now. So, what are you waiting for? Start exploring the potential of AI and see how it can transform your work and your world.

    Carlos Barroso
    Head of AI
    Teracloud

  • Teracloud Boosts Fintech Client Onboarding with Cloud and AI

    The financial technology (Fintech) landscape is fiercely competitive, with new players constantly emerging. In this environment, where speed and efficiency reign supreme, attracting and retaining customers requires a frictionless onboarding experience. At Teracloud, we recently partnered with a prominent fintech company that specializes in crafting personalized investment plans for individuals of all financial backgrounds. Our client's primary objective was to streamline their onboarding process and expedite the delivery of personalized investment recommendations, empowering their clients to embark on the journey toward financial security at an accelerated pace.

    Problem

    The client's onboarding process was mired in inefficiency, hindering both customer acquisition and satisfaction. It relied heavily on manual data collection through lengthy, cumbersome forms, which presented several challenges. Strict regulations and internal compliance rules required meticulous data gathering, and advisors spent a significant amount of time – up to two days per client – compiling a complete picture. This not only slowed down the process considerably but also limited the company's capacity to onboard new clients. Furthermore, the complexity of the forms resulted in many incomplete or inaccurate entries. This data inconsistency directly impacted the quality of the personalized investment recommendations generated, ultimately affecting customer satisfaction. Perhaps the most detrimental consequence was the high abandonment rate: faced with the daunting task of data entry, many potential customers simply gave up midway through the onboarding process. This hurt the company's revenue stream and limited the overall value proposition of its personalized investment plans.

    Solution

    Teracloud implemented a two-part solution using cloud technology and AI to transform the client's onboarding experience.
    We built a custom chat tool using a cloud framework to collect customer data and conversationally assess risk profiles. This made the process more natural and user-friendly, significantly reducing the number of customers who abandoned it. The same tool checked the collected data for completeness and accuracy, eliminating the need for manual review at this stage. Internally, a second AI-powered chat tool used the client's knowledge base to create a draft investment recommendation based on the collected data. The recommendation came with clear explanations drawn from the knowledge base, promoting transparency and trust. Advisors then worked with the chat tool to refine or correct the recommendations in a step-by-step process, leveraging the tool's speed while maintaining human oversight for compliance purposes.

    Results

    Teracloud's AI solution has improved the onboarding process, delivering a series of impactful enhancements. The error-prone nature of manual data entry has been significantly mitigated, with the rate of incomplete or inaccurate data dropping to near zero. This not only streamlines the process but also ensures the recommendations generated are built on a foundation of accurate and reliable information. Customer frustration has also been noticeably reduced: the conversational approach has led to an impressive 80% decrease in customer abandonment during onboarding, a surge in completion rates that highlights the effectiveness of the new system. But perhaps the most compelling benefit lies in the expedited timeline. Clients can now expect to receive a draft recommendation and schedule an initial meeting on the very same day they submit their information, empowering them to embark on their path toward financial security at a significantly faster pace.

    Conclusion

    Teracloud's successful use of cloud technology and generative AI has transformed the client's onboarding process.
    By streamlining data collection, using AI for recommendation generation, and facilitating collaboration between advisors and AI, Teracloud has empowered the client to deliver exceptional customer service and gain a significant competitive advantage. This project showcases the potential of AI to change the Fintech industry, paving the way for future innovations that help people manage their financial futures.

    Carlos Barroso
    Head of AI
    Teracloud

  • How to Configure Terraform with TFenv on Mac M1 Using Docker in 3 Easy Steps

    Intro

    Find the solution to Terraform compatibility conflicts on M1 architecture. This Teratip helps you bypass the difficulties related to legacy Terraform provider incompatibility on the M1 architecture by using Docker with Ubuntu Linux, so you can run your plans and applies without relying on an external Linux environment.

    Step 1: Create your Dockerfile

    Use the following Dockerfile to create a Docker image that includes Ubuntu 20.04 and tfenv.

    Step 2: Build the Docker Image

    With the Dockerfile ready, build your Docker image, setting the version of Terraform that you need, with the following command:

    docker build --build-arg TF_VERSION=0.15.4 -t maosella/tfenv:0.25 .

    Step 3: Run and Work in your Container

    To run the Docker container, first go to the root of your Terraform project and run:

    docker run -it -v ${PWD}:/workspace -w /workspace maosella/tfenv:0.25 /bin/bash

    This command starts the container and mounts the current directory (${PWD}) at /workspace inside the container, keeping changes synchronized between them. Because the container has a volume mapped to the working repository, you can edit files directly in VSCode while they stay in sync with the workspace directory inside the container. From the container's shell you can run terraform plan and terraform apply against those synchronized files without problems. The -it flag gives you an interactive shell to work with Terraform commands.

    Remember that you can change the version of Terraform at any time with:

    tfenv install 1.0.0
    tfenv use 1.0.0

    Before executing Terraform commands, configure your environment variables with the access key and secret key of your AWS user so that Terraform is authorized to access your account:

    export AWS_ACCESS_KEY_ID="<YOUR_ACCESS_KEY_ID>"
    export AWS_SECRET_ACCESS_KEY="<YOUR_SECRET_ACCESS_KEY>"

    Security Considerations: It's essential that you handle your AWS keys with caution.
    Be sure not to expose your keys in scripts or Dockerfiles. With that, you have everything in order to use Terraform normally.

    Integration with Visual Studio Code (optional)

    In VSCode, you can install the "Remote - Containers" extension to work with Docker containers directly from VSCode, managing the container filesystem as if you were working locally.

    Final Thoughts

    With these three simple steps, you can have a fully functional Terraform environment on your Mac with M1, giving you the freedom and flexibility to work on your projects without restrictions, transforming the challenge of incompatibility into a productivity win with Docker 🐳.

    Martin Osella
    Cloud Engineer
    Teracloud
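    For completeness: the Dockerfile referenced in Step 1 is not reproduced in this excerpt. A minimal sketch that would satisfy the TF_VERSION build argument used in Step 2 might look like the following; the package list and the tfenv installation method (cloning the tfutils/tfenv repository) are assumptions, not the article's original file:

```dockerfile
FROM ubuntu:20.04

# Build argument matching the docker build command in Step 2
ARG TF_VERSION=0.15.4

# Basic tooling tfenv needs: git to clone it, curl/unzip to fetch Terraform releases
RUN apt-get update && \
    apt-get install -y git curl unzip ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Install tfenv by cloning its repository and linking its binaries into PATH
RUN git clone --depth=1 https://github.com/tfutils/tfenv.git /opt/tfenv && \
    ln -s /opt/tfenv/bin/* /usr/local/bin/

# Install and select the requested Terraform version
RUN tfenv install ${TF_VERSION} && tfenv use ${TF_VERSION}

WORKDIR /workspace
CMD ["/bin/bash"]
```

    Because Terraform runs inside an amd64/arm64 Ubuntu container, the provider binaries resolved by tfenv no longer conflict with the host's M1 architecture.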

  • The Rise of Generative AI Startups

    The field of artificial intelligence (AI) is evolving fast, and one of the hottest areas is generative AI. Generative AI uses machine learning to create entirely new content, from realistic images and videos to compelling marketing copy and even novel scientific discoveries. It has the potential to impact entire industries, which explains why companies are jumping into the generative AI space. As of February 2024, the generative AI market is experiencing a boom. Tech companies, both established and startups, are pouring resources into building generative AI tools and platforms. The competition is fierce, but ultimately, accelerating innovation and driving down costs can make generative AI solutions more accessible than ever before. This blog post explores the generative AI landscape, from open-source options to new startups, covering market insights, applications, and the companies driving innovation in the field.

    Navigating the Generative AI Landscape

    Before delving into generative AI companies, it's crucial to understand industry trends, AI technology, and overall AI systems. Only then will so-called large language models (LLMs), foundation models, and open-source solutions make sense. Companies venturing into AI seek content generation features such as text and image generators, along with conversational AI like chatbots. These capabilities aim to enhance their overall offerings and unlock the potential to transform their services into generative AI platforms.

    Open Source and Collaboration

    The open-source nature of many generative AI solutions fosters collaboration within the tech community. Developers worldwide contribute to the improvement of AI models, ensuring a shared pool of knowledge and resources. This collaborative approach accelerates the progress of generative AI technology and provides greater confidence in how these models work.
    Before thinking about product and service development, developers are knee-deep in the training process of deep learning models. As these models train on outsourced input data, their capabilities will fit increasingly into the context of the businesses adopting them.

    Understanding Generative AI Companies

    Generative AI companies are revolutionizing various industries with their groundbreaking applications. These companies leverage machine learning techniques, specifically large language models, to create innovative solutions. The generative AI space has witnessed significant growth, with tech companies leading the charge.

    The Rise of Generative AI Startups

    Since November 2022, the generative AI startup scene has been vibrant, with numerous innovative ventures entering the market. San Francisco, a renowned hub for tech companies, particularly those in the AI space, hosts a significant number of generative AI startups. These startups focus on creating generative AI platforms, open-source solutions, and novel applications.

    Exploring Generative AI Applications

    Generative AI technologies aren't limited to a specific niche. They cover a broad spectrum, from natural language processing to customer service enhancement, proving useful across diverse sectors. Machine learning algorithms, a core aspect of generative AI, contribute to developing sophisticated applications. One notable area where generative AI excels is customer service. Companies are leveraging AI models to enhance communication, automate responses, and provide a seamless experience for users. The integration of generative AI tools ensures quick and accurate solutions, ultimately improving customer satisfaction. Think AI chatbots powered by natural language processing (NLP) that provide 24/7 customer support, answer complex questions, and even personalize interactions.
    Here are a few more of generative AI's most popular applications:

    • Content Creation: Generative AI can help create marketing copy, social media posts, product descriptions, and even scripts for videos.
    • Product Design: Generative AI tools can assist with product design by generating variations and optimizing for specific criteria.
    • Material Science: Generative AI can analyze vast datasets of material properties and accelerate the discovery of new materials with specific functionalities.

    Companies to look out for

    Innovative companies crowd the generative AI space. Here are a few to keep an eye on:

    • OpenAI: An AI research company known for its GPT family of large language models, pushing the boundaries of what's possible with generative AI.
    • Cohere: A company offering a powerful generative AI platform for various applications.
    • Hugging Face: An open-source platform providing access to pre-trained AI models and tools for building custom generative AI solutions.
    • Amazon Web Services (AWS): A cloud computing giant offering generative AI tools and services as part of its AI and machine learning platform, including Amazon SageMaker.

    These are just a few examples, and many other generative AI companies are making waves. The open-source movement is also playing a crucial role by making generative AI tools more accessible and fostering collaboration within the developer community.

    The Future of Generative AI

    The future of generative AI is incredibly bright. As machine learning and AI technology continue to advance, we can expect even more powerful and sophisticated generative AI tools to emerge, leading to even more innovative applications across diverse industries. However, ethical considerations surrounding AI bias and data privacy remain critical aspects to address as this technology develops.

    Final thoughts

    There are several ways to get started with generative AI if you're interested in exploring options for your business.
    Many generative AI companies offer free trials or limited-use plans, and there are numerous open-source generative AI tools available for developers. By experimenting with these tools and exploring the possibilities, you can discover how generative AI can benefit your organization. The generative AI market is dynamic and constantly evolving. By staying informed about the latest advancements and exploring the offerings of various generative AI companies, you can position yourself to leverage this powerful technology and unlock new opportunities for your business.

    Guido Casella
    Data Engineer
    Teracloud

    Ready to unlock the power of generative AI for your projects? Our cutting-edge AI services offer unparalleled creativity and efficiency. Take the next step towards revolutionizing your workflow and achieving your goals. Contact us now to explore how our generative AI services can elevate your endeavors today.

  • Velero for Disaster Recovery in EKS Cluster

    Introduction

    Velero is a robust tool for Kubernetes disaster recovery, enabling users to back up, migrate, and restore applications and persistent volumes. This section provides guidance on using Velero as a disaster recovery strategy within an Amazon EKS cluster.

    Objectives

    The primary objectives of implementing Velero for disaster recovery are as follows:

    • Efficient Backup Strategies: Leverage Velero to create periodic backups of your EKS cluster resources, ensuring minimal data loss in case of a disaster.
    • Automated Scheduling: Utilize Velero schedules to automate the backup process, reducing manual intervention and ensuring regular snapshots.
    • Seamless Restore Operations: Develop clear restore strategies using Velero manifests, allowing for a quick and efficient recovery process.

    Considerations

    • Backup Frequency: Determine an appropriate backup frequency based on the criticality of your applications and data.
    • Retention Policies: Define retention policies for your backups to manage storage costs effectively.

    Backup and restore workflow

    Velero consists of two components:

    • A Velero server pod that runs in your Amazon EKS cluster
    • A command-line client (the Velero CLI) that runs locally

    Whenever we issue a backup against an Amazon EKS cluster, Velero performs a backup of cluster resources in the following way:

    1. The Velero CLI makes a call to the Kubernetes API server to create a backup CRD object.
    2. The backup controller checks the scope of the backup CRD object (namely, whether we set filters), queries the API server for the resources that need a backup, then compresses the retrieved Kubernetes objects into a .tar file and saves it in Amazon S3.

    Similarly, whenever we issue a restore operation:

    1. The Velero CLI makes a call to the Kubernetes API server to create a restore CRD that will restore from an existing backup.
    2. The restore controller validates the restore CRD object, makes a call to Amazon S3 to retrieve the backup files, and initiates the restore operation.
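    The backup and restore CRDs described above can also be created declaratively, which is what the schedule.yaml and restore.yaml manifests in the steps below do. A hedged sketch of both (the names, namespaces, TTL, backup name, and cron expression are illustrative, not taken from the original article):

```yaml
# schedule.yaml - periodic backup of selected namespaces (illustrative values)
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 3 * * *"            # run every day at 03:00
  template:
    includedNamespaces:
      - production                  # namespaces in scope for the backup
    ttl: 720h0m0s                   # retain each backup for 30 days
---
# restore.yaml - recover from an existing backup (illustrative values)
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-production
  namespace: velero
spec:
  backupName: daily-backup-20240101030000   # name of an existing backup
  includedNamespaces:
    - production
```

    Applying the Schedule causes the backup controller to create a backup CRD on each cron tick; applying the Restore kicks off the restore controller flow described above.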
    Velero also performs backup and restore of any persistent volume in scope:

    • If you are using Amazon Elastic Block Store (Amazon EBS), Velero will create Amazon EBS snapshots of the persistent volumes in scope.
    • For any other volume type (except hostPath), use Velero's Restic integration to take file-level backups of the contents of your volumes. At the time of writing, Restic is in beta and therefore not recommended for production-grade backups.

    Steps

    1. Velero Installation. Follow the official guide for the complete Velero installation, which also outlines the AWS resources you need to create before configuring Velero: https://velero.io/docs/v1.0.0/aws-config/. Alternatively, you can install Velero with its Helm chart (https://github.com/vmware-tanzu/helm-charts/blob/main/charts/velero/values.yaml); remember to create the required AWS resources before this installation as well.

    2. Check resource creation. After a successful installation and configuration, check that all resources (IAM role, S3 bucket) were created correctly and that the Velero pod is running.

    3. Schedule Backups. Create a Velero schedule manifest (schedule.yaml) to define the backup frequency and included namespaces.

    4. Restore from Backup. In the event of a disaster, use a Velero restore manifest (restore.yaml) to initiate the recovery process.

    5. Validation. Regularly validate your disaster recovery strategy by simulating restore operations in a non-production environment.

    Martín Carletti
    Cloud Engineer
    Teracloud

    Fabricio Blas
    Cloud Engineer
    Teracloud

  • Discover the untapped power of Generative AI in the Cloud with AWS

    Unlocking Creative Potential with Generative AI in the Cloud

    In today's rapidly evolving digital landscape, creativity thrives as a driving force behind innovation. Thanks to advancements in artificial intelligence (AI), particularly generative AI, we are witnessing a profound transformation in how we approach creative endeavors. At the forefront of this revolution stands Amazon Web Services (AWS), offering a comprehensive suite of AI-powered services that revolutionize how we think about and harness creativity in the cloud.

    Generative AI: A Gateway to Boundless Creativity

    Recent years have seen remarkable advancements in the field of AI, particularly in generative AI, where machines are trained to create content, images, and even entire virtual environments. AWS has emerged as a frontrunner, spearheading the future of generative AI within the cloud environment. With its suite of innovative services like Amazon Bedrock, Amazon SageMaker, and Amazon Q, AWS empowers businesses to harness the power of generative AI and build proprietary AI models tailored to their unique needs, such as large language models.

    Amazon Bedrock: Building and Scaling Generative AI Applications with Foundation Models

    At the core of AWS's AI ecosystem lies Amazon Bedrock, a fully managed service that provides access to foundation models and serves as a backbone for cutting-edge AI development. This powerful tool offers unparalleled advantages for creativity by providing a stable and reliable infrastructure for deploying and scaling AI solutions. With Amazon Bedrock, developers and organizations can leverage the power of generative AI with confidence, knowing their solutions are built on a robust and secure foundation. This enables customers to focus more on innovation and less on infrastructure management, accelerating the pace of AI-driven creativity.
    Additionally, Amazon Bedrock fosters collaboration and interoperability across AWS's AI-powered services, allowing users to seamlessly integrate AI capabilities into their workflows and paving the way for business experimentation.

    Amazon SageMaker: Democratizing AI Development

    Central to AWS's AI offerings lies Amazon SageMaker, a fully managed service that simplifies the process of building, training, and deploying machine learning models at scale. With SageMaker, users can access a wide range of algorithms and frameworks, enabling them to experiment with generative AI capabilities without the need for specialized expertise. This democratization of AI development empowers individuals and organizations to tap into their creative potential and experiment with their own data.

    Beyond Code: Empowering Creativity with Generative AI Tools

    Amazon CodeWhisperer revolutionizes the coding experience by offering intelligent code generation capabilities. During a preview period, participants using CodeWhisperer experienced a 27% increase in task completion rates and completed tasks 57% faster than those without it, highlighting its potential to revolutionize coding workflows.

    Further expanding the boundaries of creativity, Amazon Q in QuickSight offers a transformative approach to both visualizing and analyzing data. By combining natural-language querying with generative BI authoring capabilities, analysts can create customizable visuals and refine queries effortlessly. This empowers businesses to make data-driven decisions with clarity and precision, fueling creativity in strategic planning and execution.

    Healthcare Transformed: Revolutionizing Documentation with AWS HealthScribe

    AWS HealthScribe, a HIPAA-eligible service, empowers healthcare software vendors to automate clinical documentation processes.
    By combining speech recognition and generative AI, HealthScribe analyzes patient-clinician conversations to generate accurate and easily reviewable clinical notes, reducing the burden on healthcare professionals and enhancing patient care.

    Final Thoughts: Unleashing Limitless Possibilities with Generative AI

    The convergence of generative AI and cloud computing, spearheaded by Amazon Web Services (AWS), is revolutionizing creativity across diverse domains. AWS's suite of innovative AI services enables customers to leverage generative AI and its applications, democratizing AI development, enhancing developer productivity, redefining business intelligence, and revolutionizing healthcare documentation. All in all, AWS's robust foundation empowers individuals and organizations to unleash their creative potential. As we continue to harness the power of generative AI in the cloud, the possibilities for innovation and creativity are truly limitless.

    Ready to unlock the power of generative AI for your projects? Our cutting-edge AI services offer unparalleled creativity and efficiency. Take the next step towards revolutionizing your workflow and achieving your goals. Contact us now to explore how our generative AI services can elevate your endeavors today.

    Alan Bilsky
    Data Engineer
    Teracloud

  • How to Enable DNSSEC in your domains

    The Domain Name System Security Extensions (DNSSEC) is a set of specifications that extend the DNS protocol by adding cryptographic authentication for responses received from authoritative DNS servers. Its goal is to defend against techniques that attackers use to direct computers to rogue websites and servers.

    DNSSEC adds two important features to the DNS protocol:

    • Data origin authentication allows a resolver to cryptographically verify that the data it received came from the zone where it believes the data originated.
    • Data integrity protection lets the resolver know that the data hasn't been modified in transit since it was originally signed by the zone owner with the zone's private key.

    How do DNS resolvers know to trust the DNSSEC keys?

    A zone's public key is signed, just like the other data in the zone. However, the public key is not signed by the zone's own private key, but by the parent zone's private key. Every zone's public key is signed by its parent zone, except for the root zone, which has no parent to sign its key. Therefore, the root zone's public key is an important starting point for validating DNS data. If a resolver trusts the root zone's public key, it can trust the public keys of top-level zones signed by the root's private key, such as the public key for the org zone. And because the resolver can trust the public key for the org zone, it can trust public keys signed by their respective private keys, such as the public key for icann.org. (In actual practice, the parent zone doesn't sign the child zone's key directly – the actual mechanism is more complicated – but the effect is the same as if the parent had signed the child's key.) This sequence of cryptographic key signing is called a chain of trust.

    How much does it cost to enable DNSSEC in AWS?

    Amazon Route 53 does not charge you to enable DNSSEC signing on your public hosted zones or to enable DNSSEC validation for Amazon Route 53 Resolver.
    However, when you enable DNSSEC signing on your public hosted zones, you incur AWS Key Management Service (KMS) charges for storing the private key and for using the key to sign your zones. For more information, see the AWS KMS pricing page. Note that you can share a single customer-managed AWS KMS key, stored in KMS, across multiple public hosted zones.

    How do we enable DNSSEC?

    Suppose we have a hosted zone in AWS where we host all our domains, but the domain is still registered with, for example, GoDaddy. How could we enable DNSSEC in this case? First of all, we need to take some considerations into account: DNS propagation can take anywhere from a few minutes to 24 hours, depending on the geographical location of the user, the type of DNS record being updated, and the TTL (time to live) value set for the record. During this time, the updated DNS information may not be available to all users and systems immediately.

    Prerequisites

    To configure DNSSEC for a domain, your domain and DNS service provider must meet the following prerequisites:

    • The registry for the top-level domain (TLD) must support DNSSEC. To determine whether the registry for your TLD supports DNSSEC, see "Domains that you can register with Amazon Route 53".
    • The DNS service provider for the domain must support DNSSEC. You must configure DNSSEC with the DNS service provider for your domain before you add public keys for the domain to Route 53.
    • The number of public keys that you can add to a domain depends on the TLD: .com and .net domains allow up to thirteen keys; all other domains allow up to four keys.

    Recommendations before starting

    Lowering the zone's maximum TTL will help reduce the wait time between enabling signing and inserting the Delegation Signer (DS) record. Lowering the zone's maximum TTL to 1 hour (3600 seconds) allows us to roll back after only an hour if any resolver has problems caching signed records. Also lower the SOA TTL and the SOA minimum field.
    The SOA minimum field is the last field in the SOA record data. Together, the SOA TTL and the SOA minimum field determine how long resolvers remember negative answers. After you enable signing, Route 53 name servers start returning NSEC records for negative answers. The NSEC record contains information that resolvers might use to synthesize a negative answer. If you have to roll back because the NSEC information caused a resolver to assume a negative answer for a name, then you only have to wait for the maximum of the SOA TTL and the SOA minimum field for the resolver to stop making that assumption. Make sure the TTL and SOA minimum field changes are effective: use GetChange to ensure that your changes have been propagated to all Route 53 DNS servers.

    Enabling DNSSEC signing at Route 53

    1. On the DNSSEC signing tab of the hosted zone console, click Enable DNSSEC signing.
    2. Choose to create a customer-managed CMK.
    3. Create the KSK and enable signing.
    4. After enabling DNSSEC, click View information to create DS record and check the "Establish a chain of trust" -> "Another Domain Registrar" section.

    GoDaddy configuration steps

    Go to Domain Portfolio -> Domain Settings for your domain and select DNSSEC. Create a new DS record with the following information:

    Key Tag: Key Tag in AWS
    Algorithm: Signing Algorithm Type in AWS
    Digest Type: Digest Algorithm Type in AWS
    Digest: Digest in AWS

    Testing

    To check whether the new configuration is properly set up and the DNS is answering as expected:

    dig journeytrack.io DNSKEY +dnssec

    We should receive two DNSKEYs (one for the ZSK and another for the KSK) and a signed resource record (RRSIG), confirming that the DNS servers are successfully using DNSSEC.

    To check the chain of trust with the TLD:

    dig com NS +short

    The answer should retrieve the TLD server names.

    dig journeytrack.io DS +short

    This makes sure we get the DS record for the journeytrack domain from the TLD. You should get the DS record shown in the DNSSEC recommendations for creating the record.

    dig journeytrack.io A +dnssec

    This checks whether the resource record is served with signatures.
The answer must return the A record along with its RRSIG info.

dig DNSKEY journeytrack.io +short

This validates the DS public key.

Rollback

If any problem or issue arises during the implementation, DNSSEC can easily be reverted:
Disable DNSSEC in GoDaddy and Route 53
Restore the SOA changes
Undo the NS TTL changes

Lourdes Dorado Cloud Engineer Teracloud

  • How to configure ArgoCD OIDC with Google Workspace in 5 simple steps

    There are different ways to handle authentication in ArgoCD, but using only the admin password is not secure enough. For this reason, we’ll learn how to configure your ArgoCD to integrate with Google Workspace for login. In this TeraTip we’ll cover one approach to authentication, using groups from Google Workspace.

Before you get started…

To get SSO working you need to have the SSL and URL for your server already configured; otherwise, you’ll get errors during authentication.

Step # 1: Create the OAuth Screen

First, create a project with any name you want and configure the OAuth screen as follows: in the Authorized Domains section, it is important to configure the domain of your users’ email addresses; in this case, we add the domain of our organization. Finally, on the Scopes tab select the userinfo.profile and openid scopes. Those are the scopes ArgoCD needs for login.

Step # 2: Create the OAuth Client ID

On the Credentials tab, click + Create Credentials and choose OAuth client ID. Then, under Application type, select Web Application and configure the JavaScript origins and redirect URIs. In the Authorized JavaScript origins section, configure the root URL of your ArgoCD. Then, in Authorized redirect URIs, use the same URL but append the /api/dex/callback path. Click Create and save your Client ID and Client Secret for later.

Step # 3: Configure the Service Account on Google Workspace

Now create the service account and configure domain-wide delegation so that ArgoCD is able to read the groups. In the Service Accounts section of the Google Console, click + CREATE SERVICE ACCOUNT; you only need to enter a name for it. Open your service account, go to the Keys tab, click Add Key, and select JSON as the format. Save the key; we will use it later when configuring the OIDC.
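This Client Secret and JSON key will later live in the cluster as Kubernetes secrets. Here is a minimal sketch; the argocd namespace and the secret and key names are illustrative assumptions, not names prescribed by ArgoCD:

```shell
#!/bin/sh
# Sketch: create the two Kubernetes secrets that the ArgoCD/Dex configuration
# will reference. Namespace, secret names, and key names are illustrative.
create_argocd_sso_secrets() {
  client_secret="$1"   # OAuth Client Secret from Step 2
  sa_key_file="$2"     # path to the JSON key saved in Step 3
  kubectl -n argocd create secret generic argocd-google-oauth \
    --from-literal=client-secret="$client_secret"
  kubectl -n argocd create secret generic argocd-google-sa-key \
    --from-file=googleAuth.json="$sa_key_file"
}
```

Usage would look like `create_argocd_sso_secrets "$OAUTH_CLIENT_SECRET" ./sa-key.json`, with the secret names then wired into your Dex configuration.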
Step # 4: Set up Domain Wide Delegation and enable the Admin SDK

To finish the Google configuration, you now have to set up domain-wide delegation and enable the Admin SDK. First head to the Google Admin console, then go to Security, Access and data control, API controls, and finally click Manage domain-wide delegation. Click Add Client, paste the Client ID of your service account into Client ID, and in the scopes section paste this: https://www.googleapis.com/auth/admin.directory.group.readonly Finally, head to https://console.cloud.google.com/apis/library/admin.googleapis.com and enable the Admin SDK for your project.

Step # 5: Configure ArgoCD

To configure the OIDC, create two secrets on your cluster: one for the Client Secret we got in Step 2 and one for the JSON key we got in Step 3. Then, if you are using the ArgoCD Helm chart, configure its values to wire Dex to Google using those secrets (tested on chart version 5.27.1). Now you have your ArgoCD configured with Google SSO!

Juan Wiggenhauser Cloud Engineer Teracloud

  • Security announcements at AWS re:Invent 2023

    AWS re:Invent is AWS’s end-of-the-year event where the latest developments in AWS Cloud services are announced. Our team had the pleasure of attending talks with the most important announcements for what’s next in cloud security, and the following is their shortlist.

Access Analyzer

1) Custom policy checks powered by automated reasoning. Custom policy checks validate that IAM policies adhere to your security standards ahead of deployment. They use the power of automated reasoning (security assurance backed by mathematical proof) to detect nonconformant updates to policies, and they are easy to integrate into CI/CD pipelines.

2) Simplified inspection of unused access to guide you toward least privilege. IAM Access Analyzer continuously analyzes your accounts to identify unused access and creates a centralized dashboard with its findings. The findings highlight unused roles, unused access keys for IAM users, and unused passwords for IAM users, and they provide visibility into unused services and actions for active IAM roles and users.

Security Hub

1) Customized security controls. Security teams can now refine the best practices monitored by Security Hub controls to meet more specific security expectations, such as your specific password policies, retention frequencies, or other attributes.

2) Major dashboard enhancements. New data visualizations, filtering, and customization enhancements: you can now filter and customize your dashboard views, as well as view a new set of widgets that were carefully chosen to reflect the modern cloud security threat landscape and relate to potential threats and vulnerabilities in your AWS cloud environment. The new filtering functionality allows you to filter the Security Hub dashboard by account name and ID, resource tag, product name (such as Amazon GuardDuty or Amazon Inspector), Region, severity, and application. You can also choose which widgets appear in the dashboard, and customize their position and size.
3) Findings enrichment. Metadata enrichment for findings aggregated in AWS Security Hub allows you to better contextualize, prioritize, and take action on your security findings. This enrichment adds resource tags, a new AWS application tag, and account name information to every finding ingested into Security Hub, including findings from AWS security services such as Amazon GuardDuty, Amazon Inspector, and AWS IAM Access Analyzer, as well as a large and growing list of AWS Partner Network (APN) solutions. It eliminates the need to build data enrichment pipelines or manually enrich the metadata of security findings. It also makes it easier to fine-tune findings for automation rules, search or filter findings and insights, and assess security posture status by application in Security Hub widgets and in related AWS applications.

4) New central configuration capabilities. Centrally enable and configure Security Hub standards and controls across accounts and Regions in just a few steps. Use Security Hub central configuration to address gaps in your security coverage by creating security policies with your desired standards and controls and applying them in selected Regions across accounts and Organizational Units (OUs). Set the Security Hub delegated administrator (DA) for all Regions at once, then view and configure the cloud security posture management capabilities, such as standards and controls, for all or some accounts globally, without needing to update them account by account and Region by Region.

Secrets Manager

1) Support for batch retrieval of secrets. A single API call can now identify and retrieve a group of secrets for your application. With BatchGetSecretValue, you can input a list of secret names, ARNs, or filter criteria such as tags. The API returns a response for all secrets meeting the criteria in the same format as the existing GetSecretValue API, allowing you to optimize your workloads while reducing the number of API calls.
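As a sketch of what batch retrieval looks like from the CLI (the secret names here are illustrative assumptions, not from the announcement):

```shell
#!/bin/sh
# Sketch: fetch a group of secrets with a single BatchGetSecretValue call.
# The secret names are illustrative assumptions.
get_app_secret_names() {
  aws secretsmanager batch-get-secret-value \
    --secret-id-list "app/db-password" "app/api-key" \
    --query 'SecretValues[].Name' --output text
}
```

Instead of an explicit list, filter criteria (for example, by tag) can be passed via the API’s filters, so an application can pull all of its secrets in one round trip.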
Amazon Detective

1) Security investigations for Amazon GuardDuty ECS Runtime Monitoring. Enhanced visualizations and additional context for detections on ECS. Use the new runtime threat detections from GuardDuty and the investigative capabilities from Detective to improve your detection of and response to potential threats to your container workloads.

2) Log retrieval from Amazon Security Lake. Detective integrates with Amazon Security Lake, enabling security analysts to query and retrieve logs stored in Security Lake, and to get additional information from AWS CloudTrail logs and Amazon Virtual Private Cloud (Amazon VPC) Flow Logs stored in Security Lake while conducting security investigations in Detective.

3) Investigations for IAM. Detective automatically investigates AWS Identity and Access Management (IAM) entities for indicators of compromise (IoC), helping security analysts determine whether IAM entities have potentially been compromised or involved in any known tactics, techniques, and procedures (TTP) from the MITRE ATT&CK framework. There is no additional charge for this new capability, and it is available to all existing and new Detective customers.

Amazon GuardDuty

1) Runtime monitoring for Amazon EC2. It gives you visibility into on-host, operating system-level activities and provides container-level context for detected threats. It is compatible with AWS Organizations.

2) ECS Runtime Monitoring, including AWS Fargate. An expansion of Amazon GuardDuty that introduces runtime threat detection for Amazon Elastic Container Service (Amazon ECS) workloads, including serverless container workloads running on AWS Fargate. It gives you visibility into on-host, operating system-level activities and provides container-level context for detected threats, such as containers repurposed for cryptocurrency mining or unusual activity indicating unauthorized code execution on your container.
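Runtime Monitoring is enabled as a detector-level feature. A hedged sketch follows; the detector ID is a placeholder, and the feature name reflects our understanding of the GuardDuty UpdateDetector API rather than anything stated in the announcement:

```shell
#!/bin/sh
# Sketch: enable GuardDuty Runtime Monitoring on an existing detector.
# The detector ID passed in is a placeholder; the RUNTIME_MONITORING
# feature name follows the UpdateDetector API as we understand it.
enable_runtime_monitoring() {
  detector_id="$1"
  aws guardduty update-detector \
    --detector-id "$detector_id" \
    --features '[{"Name": "RUNTIME_MONITORING", "Status": "ENABLED"}]'
}
```

In an AWS Organizations setup, the same toggle would typically be applied from the delegated administrator account so member accounts inherit it.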
AWS Analytics

1) Simplified user data access across services with IAM Identity Center. Use trusted identity propagation with AWS IAM Identity Center to manage and audit access to data and resources based on user identity. It is available to customers accessing AWS data sources through Amazon QuickSight, EMR Studio, and the Redshift query editor; supported third-party tools and applications; and S3 Access Grants. In big data environments managed by Amazon EMR, trusted identity propagation is available for EMR on EC2. It interacts with authorization engines, including Amazon Redshift, Lake Formation, and S3 Access Grants, and propagates the user’s identity to the data source, Amazon Redshift or S3.

Amazon Inspector

1) Agentless vulnerability assessments for Amazon EC2 in preview. Continuous monitoring of your Amazon EC2 instances for software vulnerabilities without installing an agent or additional software. You can expand your vulnerability assessment coverage across your EC2 infrastructure with Amazon Inspector agentless scanning for EC2 instances that do not have the SSM Agent installed or configured. Amazon Inspector takes snapshots of EBS volumes to collect the software application inventory from the instances and perform vulnerability assessments.

2) Request a cyber insurance quote from an AWS Cyber Insurance Competency Partner. Customers can receive cyber insurance pricing estimates, purchase plans, and be confident they have the coverage for security and recovery services when needed most. Customers run an AWS Security Hub assessment against the AWS Foundational Security Best Practices standard and deliver the assessment results to insurance providers. Customers with a security posture that follows AWS best practices achieve rewards similar to “safe-driver” discounts.

3) AWS Built-in Competency Partner software automates installation for customers.
AWS Built-in software uses a well-architected Modular Code Repository (MCR) designed to add value to partner software solutions. It provides building blocks, called Cloud Foundational Services, across multiple domains such as identity, security, and operations.

Final thoughts

AWS re:Invent 2023 has not only redefined the benchmarks for cloud security but has also set a new standard for collaboration between cloud providers, security solutions, and insurance services. These advancements collectively contribute to fostering a more secure, efficient, and responsive cloud computing landscape.

Lourdes Dorado Cloud Engineer Teracloud

  • Monitoring Updates at AWS Re:Invent 2023

    Welcome to our recap of the exciting monitoring announcements made during the AWS re:Invent 2023 event in Las Vegas!

1. Natural Language Query in Amazon CloudWatch

In an exciting advancement, AWS has introduced a natural language query feature for Amazon CloudWatch. You can now write more intuitive and expressive queries across logs and metrics, making it easier to extract valuable information from them. https://aws.amazon.com/blogs/aws/use-natural-language-to-query-amazon-cloudwatch-logs-and-metrics-preview/

2. Amazon Managed Service for Prometheus Collector

The new Amazon Managed Service for Prometheus collector simplifies metric collection in Amazon EKS environments. The highlight is metric collection without the need for additional agents. Interested in simpler management of your metrics in EKS? This is a must-read. https://aws.amazon.com/blogs/aws/amazon-managed-service-for-prometheus-collector-provides-agentless-metric-collection-for-amazon-eks/

3. Metric Consolidation with Amazon CloudWatch

To address hybrid and multicloud challenges, AWS has introduced a new capability for Amazon CloudWatch: you can now consolidate your metrics from hybrid, multicloud, and on-premises environments in one place. This provides a more comprehensive view and makes resource management easier. https://aws.amazon.com/blogs/aws/new-use-amazon-cloudwatch-to-consolidate-hybrid-multi-cloud-and-on-premises-metrics/

Conclusion

These advancements enhance the user experience, simplify operations, and offer a consolidated perspective across diverse cloud setups. Exciting times lie ahead in the landscape of AWS monitoring!

Martín Carletti Cloud Engineer Teracloud

  • What C-levels must know about their IT in the age of AI

    A recent comprehensive survey by Cisco underscores a critical insight: the majority of businesses are racing against time to deploy AI technologies, yet they confront significant gaps in readiness across key areas. This analysis, drawn from over 8,000 global companies, reveals an urgent need for enhanced AI integration strategies. See the original survey at Cisco global AI readiness survey, but if you want to know how to apply this information in your business today, keep reading.

Key Findings

- 97% of businesses acknowledged increased urgency to deploy AI technologies in the past six months.
- Strategic time pressure: 61% believe they have a year at most to execute their AI strategy to avoid negative business impacts.
- Readiness gaps in strategy, infrastructure, data, governance, talent, and culture, with 86% of companies not fully prepared for AI integration.

The report highlights an AI Readiness Spectrum to categorize organizations:
1. Pacesetters: Leaders in AI readiness
2. Chasers: Moderately prepared
3. Followers: Limited preparedness
4. Laggards: Significantly unprepared

This classification mirrors our approach at Teracloud using the Datera Data Maturity Model (D2M2), which we use to guide our customers towards data maturity and AI readiness.

Practical Steps for AI Integration

Let’s explore some recommendations that will help prepare your organization for the AI era.

Develop a Robust Strategy
- Prioritize AI in your business operations. The urgency is evident, with a substantial majority of businesses feeling the pressure to adopt AI technologies swiftly.
- Create a multi-faceted strategy that addresses all key pillars simultaneously. You can use our D2M2 framework to cover all your bases. Alternatively, you can base your strategy on the generic AWS Well-Architected Framework.

Ensure Data Readiness
- Recognize the critical role of 'AI-ready' data.
Data serves as the AI backbone, yet it is often the weakest link: not because we don’t have data, but because it isn’t accessible.
- Tackle data centralization issues to leverage AI's full potential. With cloud tools, the information can remain scattered while you consume it through a single endpoint, for instance using Amazon Athena and other data-at-scale tools.
- Facilitate seamless data integration across multiple sources. Employing tools like AWS Glue can help automate the extraction, transformation, and loading (ETL) processes, making diverse data sets more cohesive and AI-ready.

Upgrade Infrastructure and Networking
- To accommodate AI's increased power and computing demands, most companies (79 percent) will require further data center graphics processing units (GPUs) to support current and future AI workloads.
- AI systems require large amounts of data. Efficient and scalable data storage solutions, along with robust data management practices, are essential.
- Fast and reliable networking is necessary to support the large-scale transfer of data and the intensive communication needs of AI systems.
- Enhance IT infrastructure to support increasing AI workloads.
- Focus on network adaptability and performance to meet future AI demands.

Implement Robust Governance and Security
- Develop comprehensive AI policies, considering data privacy, sovereignty, bias, fairness, and transparency.
- AI-related regulations are evolving. A flexible governance strategy allows the organization to quickly adapt to new laws and standards.
- A solid governance framework is necessary to ensure AI is used ethically and responsibly, adhering to ethical guidelines and standards.
- Prioritize data security and privacy. Utilize AWS’s comprehensive security tools, like AWS Identity and Access Management (IAM) and Amazon Cognito, to safeguard sensitive data, a crucial aspect when deploying AI applications.

Focus on Talent Development
- Address the digital divide in AI skills.
While most companies plan to invest in upskilling, there is skepticism about the availability of talent.
- Emphasize continuous learning and skill development.

Cultivate a Data-Centric Culture
- Embrace a culture that values and understands the importance of data for AI applications.
- Address data fragmentation: over 80% of organizations face challenges with siloed data, a major impediment to AI effectiveness.

Understanding these findings is just the first step. Implementing them requires a strategic approach, one that we champion through our Datera Data Maturity Model (D2M2). Our model not only aligns with Cisco's categorizations but also offers a roadmap for businesses to evolve from AI Followers to Pacesetters. For a deeper dive into the Cisco survey, access the full report: Cisco Global AI Readiness Survey. To know more about how Teracloud helps its customers enter the Generative AI era, please contact us.

Final Thoughts

Adopting AI is no longer optional but a necessity for competitive advantage. By focusing on the six pillars of AI readiness, companies can transform challenges into opportunities, steering towards a future where AI is not just an ambition but a tangible asset driving business success.

Carlos José Barroso Head of DataOps Teracloud

To learn more about cloud computing, visit our blog for first-hand insights from our team. If you need an AWS-certified team to deploy, scale, or provision your IT resources to the cloud seamlessly, send us a message here.

  • Get your first job in IT with AWS Certifications

    Could you land your first job with just AWS certifications and no experience at all? Almost… but not exactly. The following explores how helpful an AWS certification is when landing your first job in IT, and why it’s so important not to fall for the “only certifications will guarantee you a job” trap.

An AWS certification is a professional credential offered by Amazon Web Services (AWS) that validates an individual's knowledge and expertise in various AWS cloud computing services and technologies. These certifications are designed to demonstrate a person's proficiency in using AWS services and solutions to design, deploy, and manage cloud-based applications and infrastructure. It's proof that you know how to use Amazon Web Services and understand cloud concepts.

That said, one could deduce that obtaining these certifications is a really good way to demonstrate your knowledge and stand out among your peers. But is that all? AWS Partners would disagree.

What are AWS Partners?

AWS Partners are organizations that collaborate with AWS to offer a wide range of services, solutions, and expertise related to AWS cloud computing. They come in various forms and play critical roles in helping businesses leverage AWS services to meet their unique needs. In other words, partners are companies that help AWS implement their services. There are different partner tiers:
AWS Select Tier Services Partners
AWS Advanced Tier Services Partners
AWS Premier Tier Services Partners

The equation is really simple: the more qualified you are, the more clients you get. The more clients you get, the more money the company makes. Therefore, it’s in an AWS Partner's best interest to become more specialized, and that's where certifications come into play. To become a specialized partner, among other things, you need certified technical individuals. As you can see, to be an AWS Premier Partner, a company needs 25 certified individuals.
And that’s why having a certification becomes really valuable, even more so if it’s a Professional or Specialty one.

Other Benefits

There are even badges for how many certifications a partner has, which give more credibility to the provided service. There are other benefits as a partner, such as being eligible to earn credits for the client; that means receiving hundreds or even thousands in financing through credits for you to offer your clients.

Final thoughts

To sum up, if you don’t have any experience at all, having an AWS certification will really help you obtain interviews, and if you combine the knowledge acquired with real-case scenarios you’ll be closer to landing your dream job. If, on the other hand, you only obtain the certification but don’t have any practical abilities or fieldwork, the certificate won’t really help at all.

Strategize. Find companies that are AWS Partners and apply to them. They’re looking for technical individuals and you’re looking for real-case scenarios. It’s in real-life cloud challenges where you actually get to apply your knowledge and ultimately gain the confidence and proof you’ll need to continue developing your professional skills.

Ignacio Bergantiños Cloud Engineer Teracloud

If you want to know more about AWS, we suggest checking out How to apply for Amazon's Service Delivery Program (SDP).

bottom of page