In this article we will give a broad overview of the topics covered in the webinar held just over a week ago, in which we talked about Infrastructure as Code with Terraform.
What is the target that a company usually sets itself when it wants to make the most of cloud environments?
For Criticalcase, it is to achieve so-called DevOps on the hybrid multicloud: reaching a level of automation across the various areas involving the company's internal procedures and methodologies, in order to change the paradigm and make the most of multicloud environments.
Criticalcase’s approach is hybrid and completely agnostic with respect to the cloud platform.
In our opinion, Infrastructure as Code should be the first step a company takes, as it is the foundation for building platforms.
It is a process that allows the management and provisioning of infrastructure components, both on premise (in proprietary or third-party data centers) and in the Cloud.
It is accomplished through definition files that are readable by a program such as Terraform.
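As a sketch, such a definition file might look like the following (the AWS provider, region, and AMI ID here are illustrative placeholders, not values from the webinar):

```hcl
# main.tf — a minimal Terraform definition file (assumed AWS provider)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}

# Declares a single EC2 instance; the AMI ID is a placeholder
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name = "example-web"
  }
}
```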
Infrastructure as Code offers multiple important advantages.
Terraform is an open-source project by HashiCorp capable of managing any available Cloud platform.
Thanks to its very large ecosystem, it is easy to approach the cloud directly through Terraform: there are about 150 official or verified providers and over 800 community providers.
Unlike other IaC (Infrastructure as Code) systems, which were born with an imperative approach, Terraform was born with a declarative approach that allows you to represent infrastructure objects.
The “final state” of the infrastructure is declared from the beginning, meaning that it is well known what it must look like.
Therefore, it will be Terraform itself, together with its plugins and providers, that takes care of implementing the infrastructure as initially designed.
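In practice, this declarative workflow boils down to a few CLI commands (standard Terraform usage, not specific to the webinar):

```shell
terraform init    # download the required providers/plugins
terraform plan    # show how Terraform will converge to the declared state
terraform apply   # implement the changes to reach that state
```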
The new version 1.0 of Terraform has been out for a few weeks now, introducing several new features.
This is a list of best practices derived from Criticalcase’s experience with Terraform.
At Criticalcase we have developed modules for each type of AWS component we use, which we reuse when needed.
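A reusable module is then called from a root configuration like this (the module path and inputs below are hypothetical, shown only to illustrate the pattern):

```hcl
# Calling a reusable module (hypothetical local path and variables)
module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
  name       = "prod-vpc"
}
```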
The first point to consider when scaling the system is centralizing state management and state locking (needed when the state is used from multiple points).
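One common way to centralize state is Terraform's S3 backend with a DynamoDB table for locking; the bucket, key, and table names below are assumptions for illustration:

```hcl
# Remote backend: state stored in S3, locking via DynamoDB
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"              # assumed bucket name
    key            = "prod/network/terraform.tfstate"  # assumed state path
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"                 # enables state locking
    encrypt        = true
  }
}
```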
Data sources allow Terraform to use information defined outside Terraform, whether it comes from a separate Terraform configuration or from other functions. Each provider can offer data sources along with its own set of resource types.
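For example, a data source can look up a value at plan time instead of hardcoding it (a minimal sketch using the AWS provider's `aws_ami` data source):

```hcl
# Data source: read information defined outside this configuration.
# Here we look up the most recent Amazon Linux 2 AMI instead of hardcoding it.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.amazon_linux.id  # value comes from the lookup
  instance_type = "t3.micro"
}
```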
In this case, a resource already present in Terraform is not used; instead, a program written by the developer is run.
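This is what the `external` data source does: it runs a developer-supplied program and reads its JSON output (the script name below is hypothetical):

```hcl
# The "external" data source runs a program written by the developer
# and exposes its JSON output to the configuration.
data "external" "lookup" {
  program = ["python3", "${path.module}/lookup.py"]  # hypothetical script
}

output "lookup_result" {
  value = data.external.lookup.result
}
```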
Even though HashiCorp, the maker of Terraform, advises against using provisioners unless you really cannot do without them, not everything can be managed with Terraform alone: sometimes you have to intervene on the remote machines (if it is a server) or perform local actions.
There are three types of provisioners: file and remote-exec, which connect to the remote server over SSH, and local-exec, which runs locally.
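A sketch of the three provisioner types on a single resource (the AMI, key path, and commands are placeholders):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI
  instance_type = "t3.micro"

  # SSH connection used by the file and remote-exec provisioners
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")  # assumed key path
    host        = self.public_ip
  }

  # 1. file: copy a file to the remote machine over SSH
  provisioner "file" {
    source      = "app.conf"
    destination = "/tmp/app.conf"
  }

  # 2. remote-exec: run commands on the remote machine over SSH
  provisioner "remote-exec" {
    inline = ["sudo mv /tmp/app.conf /etc/app.conf"]
  }

  # 3. local-exec: run a command on the machine running Terraform
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> created_hosts.txt"
  }
}
```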
Some resource types include nested blocks that can be iterated N times over a set of “settings”, which typically represent separate related (or embedded) objects within the container object.
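Terraform's `dynamic` blocks cover this case: the nested block is generated once per element of a collection. A minimal sketch with a security group (the ports and CIDR are illustrative):

```hcl
variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # Generates one "ingress" nested block per port in the list
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```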
In addition to all the best practices, there are also warnings, which can be summarized in three points.
If you are interested in learning more about the subject, below you will find the video of the webinar held by our Delivery Manager and Cloud Architect, Pasquale Lepera.
If you want to learn more about the webinar, or receive the slides that were presented, contact us using the following form.