
The first stable release of OpenTofu (the community fork of Terraform) is now available. It lags behind the current 1.6.6 release of Terraform, but it is a big first step. The release is backwards compatible with Terraform 1.6.0 and includes a few new features.

The big new features:

Change log

 

Amazon has finished setting up their second Canadian AWS region. This is big news for anyone in western Canada, as regional public cloud coverage there has been non-existent until now. Previously, your only options were eastern Canada (Montreal) and eastern Canada (Toronto). This is also big news for data sovereignty on AWS. Previously, you didn't have an option for a Canadian disaster recovery region: AWS only had a single Canadian region (ca-central-1), so your DR site would need to be in another country.

To use this region, you will need to enable it under your billing dashboard, as new regions are not enabled by default. This region launches with 3 AZs, which you need for proper clustering. For the longest time, the ca-central-1 (Montreal) region only had 2 AZs. I remember being asked in a job interview how many AZs ca-central-1 had and I correctly answered 2. They were convinced all regions had a minimum of 3 AZs and I got docked points. I am still fuming.
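
If you would rather script the opt-in than click through the console, the Account API can do it. A minimal sketch using boto3; it assumes ca-west-1 is the Calgary region's API name and that the caller has the account:EnableRegion permission:

```python
import boto3

# Opt-in regions like Calgary are disabled by default; this enables the
# region account-wide. Enabling is asynchronous, so poll the opt-in status
# until it reports ENABLED before trying to deploy anything there.
account = boto3.client("account")
account.enable_region(RegionName="ca-west-1")

status = account.get_region_opt_status(RegionName="ca-west-1")
print(status["RegionOptStatus"])  # ENABLING -> ENABLED
```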

Warning: Advanced technical networking and location ramblings below

The new region has 2 × 100G connections to the Calgary Internet Exchange (YYCIX). They terminate at Equinix CL-3 and DataHive. I suspect the first AZ is the standalone AWS datacentre just off Glenmore Trail east, whose location had leaked. The second one is probably located on the west side of downtown (just outside the ~~100~~ 25 year flood plain), close to DataHive. The DataHive datacentre is tiny, so co-locating an entire Amazon AZ there is not happening, but downtown Calgary has plenty of cheap office space for a datacentre conversion.

The third AZ is probably co-located at Arrow DC2 south of the airport or eStruxture CAL-2 up past the airport. Co-location would explain why there isn't a third connection to YYCIX.

As this region is directly connected to YYCIX, traffic will not be routing down to the Seattle IXP, unless you are on a local ISP (Shaw) that still doesn't peer at YYCIX. I didn't believe that rumour was still true, but I did some digging and I am not seeing a YYCIX connection registered for Shaw.

OpenTF has been renamed OpenTofu (www.linuxfoundation.org)
 

It looks like the 'TF' part of OpenTF was too similar to Terraform and they have come up with a new name for the project. In addition, the project is now a part of the Linux Foundation and they have a new website.

https://opentofu.org/

AWS is having issues (health.aws.amazon.com)
 

The us-east-1 and us-west-2 regions are experiencing networking issues, and this is also having an effect on a number of other cloud services that rely on those regions.

The number of AWS services this is affecting is growing, and it will probably end up touching the majority of their services to some degree.

It isn't a full network outage, but instead a sporadic one (too much load?). As in, one ECS task will be able to register itself with the application load-balancer, while another one will not. If you have an automated environment, this is causing rolling failures right now.
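
If you want to watch the flapping yourself rather than refresh the health dashboard, polling target health on an affected load balancer makes the pattern obvious. A rough sketch; the target group ARN is a placeholder:

```python
import boto3

# List the state of every target behind one ALB target group. During the
# incident, tasks flip between "healthy" and "unhealthy"/"unused" as
# registration sporadically fails.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/abc123"

elbv2 = boto3.client("elbv2", region_name="us-east-1")
health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)

for target in health["TargetHealthDescriptions"]:
    state = target["TargetHealth"]
    print(target["Target"]["Id"], state["State"], state.get("Reason", ""))
```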

This is impacting one of my clients greatly as those are their primary and DR regions. We are considering deploying DR in a 3rd region, but it could take hours to replicate their database.

 

The GitHub repository for the community fork of Terraform (called OpenTF) has been made public. If you use any third-party tooling (Spacelift, Scalr, Env0, Terraspace, Terragrunt, Atlantis, Digger, etc.), you will probably want to plan a switch to OpenTF instead of Terraform to remain license compliant. Well, it is actually more about the third-party tool's compliance: from this point forward, their documentation can't tell you to install a version of Terraform higher than 1.5.5. You will start to see them transition over to suggesting OpenTF instead, once a stable release is available.

OpenTF plans to remain feature compatible with Terraform, but I could see, in the future, new features being added to OpenTF that third-party tool providers require.

I wouldn't compile and use the current OpenTF code for production or even development use yet, but if you wanted to contribute to the project, now is your chance.

https://github.com/opentffoundation/opentf

The first stable release should be coming by October 1st. https://github.com/opentffoundation/opentf/milestone/3

 

CloudFormation is the most featureless of the Infrastructure-as-Code template languages. It is miles behind Terraform, Azure ARM/Bicep, and Google Cloud Deployment Manager. I don't think there has been any direct improvement to the language syntax since the introduction of YAML support back in 2016. The core syntax and functionality of CloudFormation have been frozen for many years and there is no sign that will ever change. From the outside, it appears the CloudFormation team has been under-resourced and left to rot.

Support for new resource types and properties can take up to 2 years to get implemented. If you are tied to using CloudFormation and need support for a new resource type or property, you are left with creating and maintaining custom resource types (Lambda Functions).

All recent language improvements have been in the AWS::LanguageExtensions transform, which is just an AWS-managed CloudFormation macro, and it was only released last September. CloudFormation macros are Lambda functions that run against a template before it is processed. They allow you to interpret your own syntax changes and transform the template before deployment.
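
For anyone who hasn't written one, the shape of a macro is simple even if operating them isn't. A minimal sketch of a macro Lambda handler; the NamePrefix behaviour is made up purely for illustration:

```python
# CloudFormation invokes the macro with the template fragment; the handler
# transforms it and hands it back. A failure status (or an unhandled
# exception) fails the whole deployment.
def handler(event, context):
    fragment = event["fragment"]  # the template (or section) to transform
    params = event.get("templateParameterValues", {})

    # Toy transform: prepend a hypothetical NamePrefix parameter to every
    # resource property called "Name". A real macro would implement your
    # own looping/templating syntax here instead.
    prefix = params.get("NamePrefix", "")
    for resource in fragment.get("Resources", {}).values():
        props = resource.get("Properties", {})
        if isinstance(props.get("Name"), str):
            props["Name"] = prefix + props["Name"]

    return {
        "requestId": event["requestId"],  # must be echoed back unchanged
        "status": "success",
        "fragment": fragment,
    }
```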

Before this looping function support, the AWS::LanguageExtensions transform didn't contain any functionality that made it compelling to use. If you were already aware of how to extend CloudFormation, you probably already had a collection of CloudFormation macros that went above and beyond the functionality of the AWS::LanguageExtensions transform.

Currently, if you want to do anything more advanced than what is built in, you have to create and maintain your own CloudFormation macros (more Lambda functions). They are a pain to debug, add a lot more complexity, and increase your maintenance workload. Having AWS provide a macro that greatly extends CloudFormation into something usable would be awesome. We just aren't there yet, but this update shows there is some life left in the corpse.

 

I have been researching the current state of Terraform automation and collaboration tools on behalf of a client and this is a new one that has emerged as a possible option. The client needs something to help manage their many pipelines and state files, but they are not big enough to need a full enterprise Terraform management platform such as Spacelift, Scalr, or Env0. Atlantis was on the short list, but it is showing its age and this is looking to be a better product and a good middle ground solution.

With the recent Hashicorp licensing change, this product may also be impacted. The developers claim they are not using any Hashicorp code and are not affected, but their code does execute a terraform command process, which might still run afoul of the "embedded" part of Hashicorp's BSL "Additional Use Grant". Since they are also the creators of the first fork of the MPL-licensed Terraform code-base, they will surely be under the watchful eyes of Hashicorp's lawyers.

 

The recent change in licensing across all Hashicorp products shows that Hashicorp is not able or willing to compete with the competitors to its enterprise offerings. Even though they don't officially state it, the change is targeted at competitors such as Spacelift, Scalr, and Env0. Those competitors only came to be to fill in gaps that remained after, and because of, Hashicorp's lacklustre and overpriced Terraform Cloud/Enterprise products.

The Business Source License (BSL) 1.1 is a source-available license with additional vague wording designed to prevent competitors from building competing products using the source code. The problem in this situation is that it also extends to additional products produced by the code owner (Hashicorp). This means even an open-source (non-commercial) competitor to the separate Terraform Enterprise product is not allowed to use the Terraform command, the Terraform code-base, or any other Hashicorp code-base. Anyone who does any form of Terraform automation that they then provide to their clients for production use will now need to ensure they are not seen as a competitor to a Hashicorp product.

Spacelift has already tried to reassure their customers that they are going to work on a solution going forward.

Even though Hashicorp claims to be supportive of the spirit of open source software, they aren't supportive of open collaboration and they have been resistant to upstream contributions from the community. This resistance created an environment where new enhancement toolsets were built and then evolved into products that compete with their enterprise offering. Now that they have changed their licensing, this will only exacerbate the issue. A fork of the pre-BSL-licensed Terraform code-base has already appeared, and if it or another fork gets enough support from the community, we could see the official Terraform toolset being replaced as the de facto Infrastructure-as-Code platform in use today.

I myself have created command wrappers and management scripts to work around the limitations of the Terraform command and the lack of state file drift management, so I will be watching what happens closely and am willing to offer my contributions to any potential competitor.
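
As a flavour of what I mean, here is a stripped-down sketch of the drift-checking part of such a wrapper (not my actual tooling): it runs a refresh-only plan and relies on the documented -detailed-exitcode behaviour, where 0 means clean, 2 means changes/drift, and anything else is an error.

```python
import subprocess

def check_drift(workdir: str) -> bool:
    """Return True if the Terraform state has drifted from the real infrastructure."""
    result = subprocess.run(
        ["terraform", "plan", "-refresh-only", "-detailed-exitcode", "-input=false"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:   # no changes
        return False
    if result.returncode == 2:   # drift detected
        return True
    raise RuntimeError(result.stderr)

if __name__ == "__main__":
    print("drift detected" if check_drift(".") else "no drift")
```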

Additional discussions:

Hacker News: HashiCorp adopts Business Source License

Hacker News: OpenTerraform – an MPL fork of Terraform after HashiCorp's license change

 

cross-posted from: https://programming.dev/post/1562654

FYI to all the VS Code peeps out there: malicious extensions can gain access to secrets stored by other VS Code extensions, as well as the tokens used by VS Code itself for Microsoft/GitHub.

I really don’t understand how Microsoft’s official stance on this is that this is working as intended…

If you weren’t already, be very careful about which extensions you are installing.

 

Aqua Trivy 2.9.0

Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues.

Tags: Security, Vulnerability Scanner, Monitoring

Website - Documentation - Github Home - Github Release

CoreDNS v1.11.0

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins.

Tags: DNS, Kubernetes

Website - Documentation - Github Home - Github Release

Go v1.21

Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

Tags: Programming Language, Golang

Website - Documentation - Github Home - Release

Hashicorp Consul v1.16.x

Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.

Website - Documentation - Github Home - Github Release

OpenSearch 2.9.0

OpenSearch is a community-driven, open source fork of Elasticsearch and Kibana. Elasticsearch can be used to search any kind of document. It provides scalable search, has near real-time search, and supports multitenancy. Kibana provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.

Tags: Search Engine, Dashboards, Monitoring

Website - Documentation - Downloads - Github Home - Github Release

Podman v4.6.0

Podman (the POD MANager) is a tool for managing containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman runs containers on Linux, but can also be used on Mac and Windows systems using a Podman-managed virtual machine.

Tags: Docker, Containers, Command-Line

Downloads - Github Home - Github Release

Prometheus 2.46.0 / 2023-07-25

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

Tags: Monitoring, Observability, Dashboards, Metrics, Alerting

Website - Documentation - Downloads - Github Home - Github Release

 

I was wondering how cloud providers seemed to have bottomless pits of IPv4 addresses and weren't more resistant to handing them out like candy. They should be charging more for this scarce resource. AWS was, until now, the only cloud provider not charging for static public IPv4 addresses, as long as the Elastic IP is in use.

I fully expect there will be more pressure in the future for cloud customers to use dual-stack (both IPv4 and IPv6) or IPv6-only addressing on externally facing services, and to pool services behind application load balancers or web application firewalls (WAFs). WAFs should support sending incoming IPv4 and IPv6 traffic to an IPv6-only server.
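
On the AWS side, the load balancer half of that is mostly a one-call change. A sketch, assuming the ALB's subnets already have IPv6 CIDRs assigned; the ARN is a placeholder:

```python
import boto3

# Switch an existing application load balancer to dual-stack so it answers
# clients over both IPv4 and IPv6 while still forwarding to the existing
# targets behind it.
elbv2 = boto3.client("elbv2")
elbv2.set_ip_address_type(
    LoadBalancerArn="arn:aws:elasticloadbalancing:ca-central-1:123456789012:loadbalancer/app/example/abc123",
    IpAddressType="dualstack",
)
```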

Looking at Imperva's (a WAF) documentation, that should work. I haven't tested this, so I might just have to do that.

By default Imperva handles load balancing of IPv4 and IPv6 as follows:

  • IPv4 traffic is sent to all servers.
  • IPv6 traffic is only sent to the servers that support IPv6.
  • However, if all your servers that support IPv6 are down, then IPv6 traffic is sent to your IPv4 servers.

Imperva also enables you to configure load balancing so that IPv6 traffic is only sent to IPv6 servers and IPv4 traffic is only sent to IPv4 servers. Alternatively, you can configure Imperva to send traffic to any origin server, regardless of whether it is IPv4 or IPv6.

https://docs.imperva.com/bundle/cloud-application-security/page/more/ipv6-support.htm

 

Prometheus will soon include support for ingesting OpenTelemetry metrics into the platform. Even if you understood all of those words, you might be asking, "so what?". This is a big deal for observability (a fancy name for monitoring), as it gets us one step closer to using a single agent to collect all observability telemetry (logs, metrics, traces) from servers.

Currently you would need to use something like Fluent Bit/Fluentd to collect logs, a Prometheus exporter for metrics, and OpenTelemetry for traces. There are many other tools you might use instead, but these are my current picks. If you are running VMs or physical servers, that means installing and managing three different pieces of software to cover everything. If you are running containers, that could mean up to 3 separate sidecar containers per application container within the same group/task/pod.

OpenTelemetry is being positioned as a one-stop shop for collecting and working with the three streams of telemetry data (logs, metrics, traces). Currently only trace support is production ready, but work is well under way to get logs and metrics ready for prime time.

There have been huge moves across the industry to add support for working with OTLP (OpenTelemetry Protocol) data streams. Prometheus is becoming the most popular backend for storing and alerting on metric data. The current blockers have been the lack of native support for OTLP ingestion and incompatible metric naming conventions.
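
To make the "so what?" concrete: once native OTLP ingestion lands, an application could push metrics straight to Prometheus with nothing but the OpenTelemetry SDK. A sketch in Python; the Prometheus endpoint path is an assumption based on the proposal, so today you would point the exporter at an OpenTelemetry Collector instead:

```python
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# Push metrics over OTLP/HTTP every 15 seconds. The endpoint assumes
# Prometheus exposes an OTLP receiver at this path once the feature ships.
exporter = OTLPMetricExporter(endpoint="http://prometheus:9090/api/v1/otlp/v1/metrics")
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=15_000)
provider = MeterProvider(metric_readers=[reader])

meter = provider.get_meter("example.app")
requests_total = meter.create_counter("http_requests_total")
requests_total.add(1, {"route": "/health"})
```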

According to this blog post, we are close to getting these 2 issues resolved.

https://last9.io/blog/native-support-for-opentelemetry-metrics-in-prometheus/
