
Commit 5857f97

Jaime Salas Zancada authored and committed
terraform review done
1 parent 693152b commit 5857f97

8 files changed: 132 additions & 64 deletions

Lines changed: 27 additions & 0 deletions
```jsonc
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/ubuntu
{
  "name": "Ubuntu",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/base:jammy",
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {}
  },

  // Features to add to the dev container. More info: https://containers.dev/features.
  // "features": {},

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],

  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",

  // Configure tool-specific properties.
  // "customizations": {},

  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  "remoteUser": "root"
}
```
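Note that `devcontainer.json` is JSON-with-comments (JSONC), so a plain JSON parser will reject it. As a minimal sketch (my own illustration, not part of the commit), the following Python strips `//` line comments before parsing and lists the features this config requests:

```python
import json
import re

def parse_jsonc(text: str) -> dict:
    """Naive JSONC parser: drop whole-line // comments, then parse as JSON.
    (Good enough for this file; it does not handle // inside strings.)"""
    stripped = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

devcontainer = parse_jsonc("""
// Ubuntu base image plus the CLIs used in the course
{
  "name": "Ubuntu",
  "image": "mcr.microsoft.com/devcontainers/base:jammy",
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    "ghcr.io/devcontainers/features/terraform:1": {}
  },
  "remoteUser": "root"
}
""")

# The three feature ids requested above
print(sorted(devcontainer["features"]))
```

When the container is built, each feature installs its CLI (aws, az, terraform) into the image.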

05-iac/00-terraform/03-desplegando-config-base/readme.md

Lines changed: 9 additions & 9 deletions
````diff
@@ -41,7 +41,7 @@ aws_instance.web_server.name

 We open `./.start-app/main.tf`

-```tf
+```ini
 provider "aws" {
   access_key = "ACCESS_KEY"
   secret_key = "SECRET_KEY"
@@ -51,7 +51,7 @@ provider "aws" {

 This block tells Terraform that we will use `AWS` as the **provider**.

-```tf
+```ini
 data "aws_ssm_parameter" "ami" {
   name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
 }
@@ -63,7 +63,7 @@ This is a **service manager parameter**, to which we give as tag name

 In the `NETWORKING` section we create the `VPC`

-```tf
+```ini
 resource "aws_vpc" "vpc" {
   cidr_block           = "10.0.0.0/16"
   enable_dns_hostnames = "true"
@@ -73,7 +73,7 @@ resource "aws_vpc" "vpc" {

 Next we create the `internet gateway` and associate it with the VPC we created earlier. For that we use `vpc_id = aws_vpc.vpc.id`

-```tf
+```ini
 resource "aws_internet_gateway" "igw" {
   vpc_id = aws_vpc.vpc.id

@@ -84,7 +84,7 @@ resource "aws_internet_gateway" "igw" {

 We create a `subnet` associated with the `VPC`. Thanks to the entry `map_public_ip_on_launch = "true"`, we get a public IP

-```tf
+```ini
 resource "aws_subnet" "subnet1" {
   cidr_block              = "10.0.0.0/24"
   vpc_id                  = aws_vpc.vpc.id
@@ -94,7 +94,7 @@ resource "aws_subnet" "subnet1" {

 We create a `route table` and associate it with our `VPC`. For the official documentation of this resource, follow this link: [Route tables official Docs](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html)

-```tf
+```ini
 resource "aws_route_table" "rtb" {
   vpc_id = aws_vpc.vpc.id

@@ -109,7 +109,7 @@ In the nested block we can specify a `route` to add to the `route table`

 Finally we associate our `route table` with a single `subnet`

-```tf
+```ini
 resource "aws_route_table_association" "rta-subnet1" {
   subnet_id      = aws_subnet.subnet1.id
   route_table_id = aws_route_table.rtb.id
@@ -118,7 +118,7 @@ resource "aws_route_table_association" "rta-subnet1" {

 We create a `security group` that allows port 80 from any address to talk to our `EC2` instance

-```tf
+```ini
 # Nginx security group
 resource "aws_security_group" "nginx-sg" {
   name   = "nginx_sg"
@@ -146,7 +146,7 @@ We are associating this `security group` with our `VPC`, and we are creating an

 Finally we have the EC2 instance.

-```tf
+```ini
 resource "aws_instance" "nginx1" {
   ami           = nonsensitive(data.aws_ssm_parameter.ami.value)
   instance_type = "t2.micro"
````
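As a quick sanity check on the CIDRs in the diff above (`10.0.0.0/16` for the VPC, `10.0.0.0/24` for the subnet), Python's `ipaddress` module can confirm the subnet sits inside the VPC's address space — an illustrative aside, not part of the course files:

```python
import ipaddress

# CIDRs from main.tf: the VPC block and the subnet carved out of it
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet1 = ipaddress.ip_network("10.0.0.0/24")

# A subnet must be fully contained in its VPC's address range
print(subnet1.subnet_of(vpc))  # → True
print(vpc.num_addresses)       # → 65536 addresses in the /16
print(subnet1.num_addresses)   # → 256 addresses in the /24
```

AWS applies the same containment rule: a subnet CIDR outside the VPC CIDR is rejected at creation time.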

05-iac/00-terraform/04-usando-inputs-outputs/02-demo.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -245,7 +245,8 @@ resource "aws_security_group" "nginx-sg" {
 }
 # ....
 ```
-maps entries in `variables.tf` for the instances
+
+We create entries in `variables.tf` for the instances

 ```diff
````
05-iac/00-terraform/05-incorporando-recursos/11-demo.md

Lines changed: 4 additions & 0 deletions
````diff
@@ -162,6 +162,10 @@ resource "aws_lb_target_group" "nginx" {

 ```

+```bash
+terraform validate
+```
+
 ### Step 2. Create the plan

 Now we are ready to generate the `plan`, but first we register the credentials if they are not already set in the terminal:
````
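Terraform's AWS provider reads credentials from the standard environment variables, so "registering them in the terminal" usually means exporting `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (those variable names are the real ones the provider reads). A small Python sketch — illustrative only, with placeholder values — that checks both are set before running a plan:

```python
import os

REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")

def missing_credentials(env) -> list:
    """Return the credential variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Placeholder values for illustration -- never hardcode real keys
fake_env = {"AWS_ACCESS_KEY_ID": "AKIA...", "AWS_SECRET_ACCESS_KEY": ""}
print(missing_credentials(fake_env))    # → ['AWS_SECRET_ACCESS_KEY']
print(missing_credentials(os.environ))  # whatever your shell has exported
```

If the list is non-empty, `terraform plan` will fail to authenticate against AWS.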

05-iac/00-terraform/06-incorporando-nuevos-providers/12-demo.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -24,12 +24,12 @@ Here we can find an [example of using the provider](https://registry.terraf

 We create the file `./lab/lc_web_app/providers.tf`

-```tf
+```ini
 terraform {
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = "~> 3.0"
+      version = "~> 5.0"
     }
   }
 }
@@ -82,7 +82,7 @@ terraform {
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = "~> 3.0"
+      version = "~> 5.0"
     }
   }
 }
````
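The `~>` operator in the version bump above is Terraform's pessimistic constraint: `~> 5.0` accepts any `5.x` release but rejects `6.0`. A hedged Python sketch of that rule (my own illustration of the semantics, not Terraform code):

```python
def satisfies_pessimistic(version: str, constraint: str) -> bool:
    """Check `version` against a `~> X.Y` constraint: the rightmost
    component of the constraint may grow, everything left of it must match."""
    v = [int(p) for p in version.split(".")]
    c = [int(p) for p in constraint.split(".")]
    # Components left of the last constrained component must match exactly
    if v[: len(c) - 1] != c[:-1]:
        return False
    # The last constrained component may only grow
    return v[len(c) - 1] >= c[-1]

print(satisfies_pessimistic("5.31.0", "5.0"))  # → True  (any 5.x is allowed)
print(satisfies_pessimistic("6.0.0", "5.0"))   # → False (major version bumped)
print(satisfies_pessimistic("3.76.1", "5.0"))  # → False (old major excluded)
```

This is why the commit must bump `~> 3.0` to `~> 5.0`: the old constraint would never select a 5.x provider release.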

05-iac/00-terraform/06-incorporando-nuevos-providers/13-demo.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -29,7 +29,7 @@ terraform {
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = "~> 3.0"
+      version = "~> 5.0"
     }
 +
 +    random = {
````
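The diff adds the `hashicorp/random` provider to `required_providers`. A common use of that provider is generating a random suffix so resource names (for instance S3 bucket names, which must be globally unique) do not collide — that use case is my assumption about this lab, not stated in the diff. A Python analogue of `random_integer`, seeded so the example is deterministic:

```python
import random

def random_suffix_name(prefix: str, seed: int) -> str:
    """Mimic Terraform's random_integer resource: a reproducible integer in a
    fixed range, appended to a name prefix (seeded for determinism here)."""
    rng = random.Random(seed)
    suffix = rng.randint(10000, 99999)  # like random_integer { min = 10000, max = 99999 }
    return f"{prefix}-{suffix}"

name = random_suffix_name("web-app", seed=42)
print(name)  # a name like "web-app-12345", stable for a given seed
```

Like Terraform's random resources (which keep their value in state), seeding here makes the suffix reproducible across runs.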

05-iac/00-terraform/06-incorporando-nuevos-providers/14-demo.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -19,10 +19,10 @@ Create `lab/lc_web_app/s3.tf`
 ```tf
 # aws_s3_bucket

-# aws_s3_bucket_acl
-
 # aws_s3_bucket_policy

+# aws_s3_object
+
 # aws_iam_role

 # aws_iam_role_policy
````

05-iac/00-terraform/06-incorporando-nuevos-providers/15-demo.md

Lines changed: 84 additions & 48 deletions
````diff
@@ -58,74 +58,102 @@ We are now ready to generate the `Policy`; we click `Generate Policy`,

 ```json
 {
-  "Id": "Policy1671300749955",
+  "Id": "Policy1704786378589",
   "Version": "2012-10-17",
   "Statement": [
     {
-      "Sid": "Stmt1671300748312",
+      "Sid": "Stmt1704786159953",
       "Action": [
         "s3:PutObject"
       ],
       "Effect": "Allow",
       "Resource": "arn:aws:s3:::${BucketName}/${KeyName}",
       "Principal": {
         "AWS": [
-          "elb"
+          "AWS"
         ]
       }
-    }
-  ]
-}
-```
-
-Now from this skeleton we are going to generate the json we need
-
-```diff
-{
-- "Id": "Policy1671300749955",
-+ "Id": "Policy",
-  "Version": "2012-10-17",
-  "Statement": [
+    },
     {
--     "Sid": "Stmt1671300748312",
+      "Sid": "Stmt1704786312412",
       "Action": [
         "s3:PutObject"
       ],
       "Effect": "Allow",
--     "Resource": "arn:aws:s3:::${BucketName}/${KeyName}",
-+     "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
+      "Resource": "arn:aws:s3:::${BucketName}/${KeyName}",
+      "Condition": {
+        "StringEquals": {
+          "s3:x-amz-acl": "bucket-owner-full-control"
+        }
+      },
       "Principal": {
         "AWS": [
--       "elb"
-+       "${data.aws_elb_service_account.root.arn}"
+          "Service"
+        ]
+      }
+    },
+    {
+      "Sid": "Stmt1704786376560",
+      "Action": [
+        "s3:PutObject"
+      ],
+      "Effect": "Allow",
+      "Resource": "arn:aws:s3:::${BucketName}/${KeyName}",
+      "Principal": {
+        "AWS": [
+          "elb"
         ]
       }
     }
   ]
 }
 ```

-The final `json` looks like:
+Now from this skeleton we generate the json we need; we replace the bucket-name resource with the one computed in `locals`:
+
+```diff
+-"Resource": "arn:aws:s3:::${BucketName}/${KeyName}",
++"Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
+```
+
+The final `json` we need to apply:

 ```json
 {
-  "Id": "Policy",
   "Version": "2012-10-17",
   "Statement": [
     {
-      "Action": [
-        "s3:PutObject"
-      ],
       "Effect": "Allow",
-      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
       "Principal": {
-        "AWS": [
-          "${data.aws_elb_service_account.root.arn}"
-        ]
+        "AWS": "${data.aws_elb_service_account.root.arn}"
+      },
+      "Action": "s3:PutObject",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*"
+    },
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "delivery.logs.amazonaws.com"
+      },
+      "Action": "s3:PutObject",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
+      "Condition": {
+        "StringEquals": {
+          "s3:x-amz-acl": "bucket-owner-full-control"
+        }
       }
+    },
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "delivery.logs.amazonaws.com"
+      },
+      "Action": "s3:GetBucketAcl",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}"
     }
   ]
 }
+
 ```

 ### Step 3. We generate the Bucket
@@ -141,37 +169,46 @@ resource "aws_s3_bucket" "web_bucket" {
   tags = local.common_tags
 }

-# aws_s3_bucket_acl
-resource "aws_s3_bucket_acl" "web_bucket_acl" {
-  bucket = aws_s3_bucket.web_bucket.id
-  acl    = "private"
-}
-
-# aws_s3_bucket_policy
-resource "aws_s3_bucket_policy" "allow_elb_logging" {
+## aws_s3_bucket_policy
+resource "aws_s3_bucket_policy" "bucket_policy" {
   bucket = aws_s3_bucket.web_bucket.id
   policy = <<POLICY
 {
-  "Id": "Policy",
   "Version": "2012-10-17",
   "Statement": [
     {
-      "Action": [
-        "s3:PutObject"
-      ],
       "Effect": "Allow",
-      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
       "Principal": {
-        "AWS": [
-          "${data.aws_elb_service_account.root.arn}"
-        ]
+        "AWS": "${data.aws_elb_service_account.root.arn}"
+      },
+      "Action": "s3:PutObject",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*"
+    },
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "delivery.logs.amazonaws.com"
+      },
+      "Action": "s3:PutObject",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}/alb-logs/*",
+      "Condition": {
+        "StringEquals": {
+          "s3:x-amz-acl": "bucket-owner-full-control"
+        }
       }
+    },
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "delivery.logs.amazonaws.com"
+      },
+      "Action": "s3:GetBucketAcl",
+      "Resource": "arn:aws:s3:::${local.s3_bucket_name}"
     }
   ]
 }
 POLICY
 }
-
 # aws_iam_role

 # aws_iam_role_policy
@@ -181,7 +218,6 @@ resource "aws_s3_bucket_policy" "allow_elb_logging" {
 ```

 * We take the bucket name from local
-* We set `acl` to private
 * We set `force_destroy` so that Terraform removes it on `destroy`
 * In the `resource` for the `bucket policy`, we want to allow the load balancer and the `delivery service logs` access to the S3 bucket. We do this using `Allow`, and we reference as principal our `Elastic Load Balancer` service account from the `data source`. Remember that we declared this in `loadbalancer.tf` as the entry `data "aws_elb_service_account" "root" {}`; this is the `data source` that references the `service account` used by the `Elastic Load Balancer` in our region.

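Once Terraform interpolates the heredoc, the bucket policy must be valid JSON with three `Allow` statements: the ELB service account writing logs, the log delivery service writing objects with the `bucket-owner-full-control` ACL, and the log delivery service reading the bucket ACL. A quick Python shape-check — illustrative only; `my-bucket` and the account ARN stand in for the interpolated values:

```python
import json

# The policy with the Terraform interpolations already resolved;
# "my-bucket" and the IAM account ARN are placeholders for illustration.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
     "Action": "s3:PutObject",
     "Resource": "arn:aws:s3:::my-bucket/alb-logs/*"},
    {"Effect": "Allow",
     "Principal": {"Service": "delivery.logs.amazonaws.com"},
     "Action": "s3:PutObject",
     "Resource": "arn:aws:s3:::my-bucket/alb-logs/*",
     "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},
    {"Effect": "Allow",
     "Principal": {"Service": "delivery.logs.amazonaws.com"},
     "Action": "s3:GetBucketAcl",
     "Resource": "arn:aws:s3:::my-bucket"}
  ]
}
""")

# Three Allow statements: account PutObject, delivery PutObject, delivery GetBucketAcl
assert len(policy["Statement"]) == 3
assert all(s["Effect"] == "Allow" for s in policy["Statement"])
print("policy shape ok")
```

A malformed heredoc (for example a stray trailing comma) would make `terraform apply` fail when S3 validates the policy, so checking the JSON first saves a round trip.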